The Hundreds of App Parameters You Are Failing to Optimize

Lots of people claim to be optimizing their apps. But in reality, they are barely scratching the surface. The vast majority of optimization tools focus on code and the app layer: UI, database schema, and so on. Application Performance Monitoring (APM) systems monitor basic app transactions and track top-layer usage statistics. At most, they might offer some broad recommendations about how to reduce bottlenecks.

But the thousands of Linux parameters, the multiple pieces of middleware, the cloud and container layers – most people don’t go near these. Even experienced application architects are frequently unable to identify more than a third of the parameters available to them. They leave the settings at default, and over-provision and over-spend for peace of mind. Some companies just buy all the resources they can afford. Which is great for Amazon, Microsoft and Google. But what does it mean for the company? Millions of dollars spent needlessly, and performance potential left untapped because of the failure to properly optimize.

Here are some examples of how 90% of people routinely fail to properly optimize:

  • Failing to test their latest instance types. 
  • Failing to test their high-IOPS volumes. 
  • Not checking whether the application is sensitive to stolen CPU.
  • When using a JVM, not testing different garbage collection settings. 
  • Not pinpointing the ideal thread count setting.
  • Not experimenting with Linux’s noatime mount option, which skips access-time writes on reads. 
  • If using PHP, not experimenting with different values for min_spare_servers and start_servers. Also, not trying the ondemand process manager instead of dynamic.
  • Not tweaking kernel parameters like page sizes and jumbo frame (MTU) sizes.
  • Not tweaking application parameters like thread pools, cache timeouts and write delays.
  • Not testing buffer sizes.
  • Not testing API traffic limiters. 
  • Or file descriptor limits.
  • Or cache size.
  • And literally thousands more examples.
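To make the list above concrete, here is a rough sketch of what a handful of those knobs look like in practice. The values are purely illustrative, not recommendations – the whole point of the article is that the right values depend on your specific workload and can only be found by testing:

```shell
# Kernel: socket buffer sizes and file descriptor limits (illustrative values only)
sysctl -w net.core.rmem_max=16777216      # max socket receive buffer
sysctl -w net.core.wmem_max=16777216      # max socket send buffer
sysctl -w fs.file-max=2097152             # system-wide file descriptor limit

# Filesystem: remount with noatime to skip access-time writes on reads
mount -o remount,noatime /data

# JVM: try a different garbage collector and explicit heap sizing
java -XX:+UseG1GC -Xms4g -Xmx4g -jar app.jar

# PHP-FPM pool config (www.conf): try ondemand instead of dynamic
#   pm = ondemand
#   pm.max_children = 50
#   pm.process_idle_timeout = 10s
```

Each of these is one dimension in the search space; none of them can be tuned in isolation, because they interact with each other and with the workload.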

Why? Why are we failing to optimize these hundreds of important app parameters?

In short: because it is too complicated. In the DevOps era, even a simple application with a handful of containers can have quite literally trillions of resource and basic parameter permutations. Not only are these permutations numerous; they are interdependent, and perpetually shifting. To effectively engage with them, you would need a flawless knowledge of the entire application infrastructure and the workload itself.
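A quick back-of-the-envelope calculation shows how fast this space explodes. The numbers below are made up but conservative: assume each container exposes just 10 tunable parameters with 8 plausible values each.

```python
# Count joint parameter permutations for a toy app: each container has
# `params` independent tunables, each with `values` plausible settings.
def permutations(containers: int, params: int, values: int) -> int:
    per_container = values ** params    # combinations for one container
    return per_container ** containers  # joint combinations across containers

# One container: 8^10 = 2^30, already over a billion combinations.
print(permutations(1, 10, 8))   # 1073741824

# Three containers: (8^10)^3 = 8^30, on the order of 10^27 --
# far beyond "trillions", and this ignores interdependence entirely.
print(permutations(3, 10, 8))
```

Even at one test per second, exhaustively sweeping the single-container case would take over 30 years – which is why blind manual tweaking is a non-starter.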

This is why so many people are failing to optimize their apps properly. It’s incredibly, dauntingly complex. And tweaking blindly is very risky. So the safe option is not to touch anything. 

Opsani’s Continuous Optimization

This is why we built Opsani. Opsani performs Continuous Optimization of apps at the deployment level. It slots into the CI/CD chain and leverages deep reinforcement learning to continuously examine millions of configurations and identify the optimal combination of resources and parameter settings. Opsani reacts constantly to new traffic patterns, new code, new instance types, and all other relevant factors. It learns as it goes, so the improvements get better and better.

Opsani dives right into all those app parameters that are routinely ignored, left to run at too high a cost and too low an impact. The end result is that infrastructure is tuned precisely to the workload and goals of the application – whether those goals relate to cost, performance, or some balance of the two. On average, when they implement CO, Opsani customers experience a 2.5x increase in performance, or a 40-70% decrease in cost. Overnight.

Contact us today for a free demo.
