Deep Reinforcement Learning

Why aren’t more enterprises doing real application performance tuning in their cloud apps?

Short answer: it’s too hard. 

People know that altering container or VM resource settings can have a huge impact on the cost and performance of cloud applications. But modern cloud-native microservice architectures are daunting and complicated. Even a simple app, composed of a few containers, still has trillions of possible resource and parameter permutations.

This means a massive proliferation of possible configuration tweaks. Not only changes to basic parameters, but to finer-grained ones that govern requests per second and response time: settings like VM instance type, CPU shares, thread count, garbage collection, and memory pool sizes.
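To see how quickly these permutations multiply, consider a back-of-the-envelope calculation. The option counts below are purely illustrative (not drawn from any real cloud catalog), but the combinatorial math is the point:

```python
from math import prod

# Hypothetical option counts per tunable for a single container
# (illustrative numbers, not from any real provider's catalog).
options = {
    "vm_instance_type": 30,
    "cpu_shares": 64,
    "memory_limit": 64,
    "thread_count": 32,
    "gc_algorithm": 4,
    "memory_pool_size": 64,
}

# Every combination of settings is a distinct configuration.
per_container = prod(options.values())
print(f"{per_container:,} combinations per container")

# Even a small app with four containers multiplies this out:
total = per_container ** 4
print(f"{total:.2e} combinations for a 4-container app")
```

With just six tunables per container, a single container already has over a billion configurations, and a four-container app has far more than any exhaustive search could ever cover.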

Getting the right instance, the right number of instances, and the right settings in each instance involves numerous interdependencies (that are also constantly shifting). And so to confidently intervene in this complex system – to properly tune performance – you would have to possess perfect knowledge of the whole infrastructure, covering layers across the application, data, and cloud infrastructure stack. On top of this, you would need to be deeply familiar with the application workload itself.

It is highly unlikely that any person within an organization possesses all of this knowledge. Whoever wrote the code probably won’t be an infrastructure nerd. Anyone who is comfortable with both will almost always be a generalist. In summary: real application performance tuning is beyond the reach of the human mind. We need help from artificial cognition.

And that is why Opsani uses Deep Reinforcement Learning.

Deep reinforcement learning (DRL) is a branch of machine learning powered by neural networks – digital systems that mirror the human brain’s instinct for pattern recognition. DRL converts observed patterns and learned responses into ever more refined algorithmic behavior. 

In Opsani’s case, we pay close attention to how movements in every sort of setting affect cloud app performance, and tweak resource assignments and configuration settings to reduce cost or enhance performance. Data on the effect of every alteration is fed through the perfect recall of the neural network, and the quality of Opsani’s interventions compounds over time.

Furthermore, OpsaniAI uses deep reinforcement learning to optimize those settings that are too complex for humans to touch: middleware configuration variables like JVM GC type and pool sizes, kernel parameters like page sizes and jumbo packet sizes, and application parameters like thread pools, cache timeouts and write delays. 

Because it is constantly gathering new data, Opsani constantly discovers more solutions.

The engine reacts constantly to new traffic patterns, new code, new instance types, and all other relevant factors. With each new iteration, the system’s predictions home in on the optimal solution, and as improvements are found they are automatically promoted. This is called Cloud Optimization.
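The iterate-measure-promote cycle described above can be sketched as a simple loop. The structure here is hypothetical (not Opsani's actual engine or API): propose a candidate configuration, measure it, and promote it only if it beats the current baseline:

```python
import random

def optimization_cycle(measure, current, propose, iterations=60):
    """Trial proposed configs and promote any that beat the baseline."""
    best_score = measure(current)
    for _ in range(iterations):
        candidate = propose(current)
        score = measure(candidate)
        if score > best_score:            # improvement found: promote it
            current, best_score = candidate, score
    return current

# Toy workload: performance peaks at 8 worker threads
def measure(config):
    return -abs(config["threads"] - 8)

rng = random.Random(0)

def propose(config):
    # neighbour search: nudge the thread count up or down by one
    return {"threads": max(1, config["threads"] + rng.choice([-1, 1]))}

tuned = optimization_cycle(measure, {"threads": 2}, propose)
print(tuned)  # climbs from 2 threads toward the optimum of 8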

Advanced application performance tuning has historically been judged impossible, too complex to contemplate. No longer. OpsaniAI is simple to integrate and gets up and running with a Docker run command. Most users see the benefits within hours.

On average, when they implement Cloud Optimization, Opsani customers experience a 2.5x increase in performance or a 40-70% decrease in cost.

To learn more about Cloud Optimization, check out our whitepaper:

Download Continuous Optimization Whitepaper