We hate to break it to you, but recommendation engines aren’t optimizing your application. 

Let’s break down the difference between a recommendation engine and an autonomous engine. 

A recommendation engine is simply a reporting mechanism: it tells you that something is wrong. Say you're spending too much money on a particular application or service in your cloud; a recommendation engine will notify you of this. Autonomous optimization is the next generation: it takes action to remediate the problem, and it does so automatically. AI and ML tune your application for the best performance at the lowest cost.
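To make the distinction concrete, here is a minimal, hypothetical sketch (the function names, fields, and thresholds are illustrative, not Opsani's API): a recommendation engine stops at reporting, while an autonomous optimizer closes the loop by changing the configuration itself.

```python
# Hypothetical sketch: names and fields are illustrative, not a real API.

def recommend(service):
    """Recommendation engine: observe and report, but take no action."""
    if service["monthly_cost"] > service["budget"]:
        return f"ALERT: {service['name']} is over budget"
    return None

def autonomously_optimize(service):
    """Autonomous optimization: observe, decide, and act."""
    if service["monthly_cost"] > service["budget"]:
        # Act on the finding instead of just reporting it,
        # e.g. scale down an oversized resource allocation.
        service["cpu_cores"] = max(1, service["cpu_cores"] - 1)
    return service

svc = {"name": "checkout", "monthly_cost": 1200, "budget": 1000, "cpu_cores": 4}
print(recommend(svc))              # only tells you something is wrong
print(autonomously_optimize(svc))  # actually changes the configuration
```

The difference is the last step: both detect the overspend, but only the second function modifies the running service.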

What’s the problem with recommendation engines?

Post-processing. At the end of the month, you get a long, detailed report, and then someone has to go through it to figure out which services are costing what and how that relates to the application. Unless you have a really sophisticated tagging setup, it's tough to charge costs back and figure out what an application really costs.
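As a hypothetical illustration of why chargeback depends on tagging, the snippet below groups made-up billing line items by an `app` tag; anything untagged lands in a bucket no one owns, which is exactly the cost you can't attribute to an application.

```python
from collections import defaultdict

# Made-up billing line items; only some carry an "app" tag.
line_items = [
    {"service": "ec2", "cost": 420.0, "tags": {"app": "checkout"}},
    {"service": "rds", "cost": 310.0, "tags": {"app": "checkout"}},
    {"service": "s3",  "cost": 95.0,  "tags": {}},  # untagged
]

costs = defaultdict(float)
for item in line_items:
    app = item["tags"].get("app", "untagged")  # unattributable bucket
    costs[app] += item["cost"]

print(dict(costs))  # {'checkout': 730.0, 'untagged': 95.0}
```

With a spotty tagging setup, the "untagged" bucket dominates, and the monthly report can't tell you what any one application really costs.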

Opsani optimizes at a service level.  

This means we optimize service by service and can give you an accurate account of what each service is costing. We optimize that service autonomously and continuously, so its runtime and application parameters are continually updated for the highest performance at the lowest cost.

This is where optimization 1.0 and optimization 2.0 come in.

Optimization 1.0 products like CloudHealth, Cloudability, or Trusted Advisor tell you there's a problem; they don't do anything about it. Cloud optimization 2.0 attempts to remediate the problem. Opsani not only remediates the problem, we optimize it. We take care of the problem so your application costs you less and performs better. This loops in reliability, because your application is continually tuned to make sure you have the right configuration at the right time.

How does this cloud optimization 2.0 actually work?

Opsani uses artificial intelligence and machine learning to analyze your application's load and performance metrics continuously. As your application runs throughout the day, we gather those performance metrics from the APM and tune resource parameters such as CPU, memory, and replica counts, so that your application is sized and scaled to the load profile it is actually experiencing. In addition to those resource parameters, we can also tune app-specific parameters like worker threads, cache sizes, and memory allocations. By tuning these, we can tune the app for optimum performance on the available resources under its current load profile. Check out our last blog post, where Opsani was recognized as a Gartner 2020 Cool Cloud Vendor, for more Opsani info!
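The measure-and-adjust cycle described above can be sketched as a simple control loop. This is an illustrative simplification, not Opsani's actual algorithm: the metrics source stands in for an APM client, and the thresholds and parameter names are made up.

```python
# Illustrative sketch of one iteration of a continuous tuning loop.
# Thresholds and the metrics source are hypothetical, not Opsani's
# real implementation.

def fetch_metrics():
    """Stand-in for pulling load/performance metrics from an APM."""
    return {"cpu_utilization": 0.85, "p95_latency_ms": 240}

def tune(settings, metrics):
    """Adjust resource and app-specific parameters toward the observed load."""
    if metrics["cpu_utilization"] > 0.8:
        settings["replicas"] += 1        # resource knob: scale out under load
    elif metrics["cpu_utilization"] < 0.3 and settings["replicas"] > 1:
        settings["replicas"] -= 1        # resource knob: scale in when idle
    if metrics["p95_latency_ms"] > 200:
        settings["worker_threads"] += 2  # app-specific knob
    return settings

settings = {"cpu": 2.0, "memory_gb": 4, "replicas": 3, "worker_threads": 8}
settings = tune(settings, fetch_metrics())
print(settings)  # replicas and worker_threads bumped for the heavy load above
```

In practice this loop runs continuously, and the decision step is driven by ML over the full load profile rather than fixed thresholds, but the shape is the same: gather metrics, adjust parameters, repeat.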