Working at Opsani is guaranteed to make you appreciate the incredible capabilities of AI and machine learning. With a view into the technology, you get a firsthand look at how the Opsani platform achieves unprecedented levels of both cloud and performance optimization.

But often when I speak to non-users about our product, they harbour misconceptions about how it works and what it does. Perhaps they have experience with another aspiring cloud optimization platform. Perhaps they think cloud optimization is the same thing as APM. Whatever the reason, they often ask questions that clearly show they don’t understand Opsani.

Here are three big myths about Opsani, and why those myths get it so wrong.

Myth #1: Opsani is just another “notification” solution

When talking to potential users, we often hear some version of: “So your platform gives us notifications and suggestions on what changes to make, is that right?”

Answer: No! Granted, this is what many optimization tools do: notify you about basic issues so you can take action. CloudHealth, for example, does this for AWS environments, and the advice tends to be basic. Lots of performance optimization tools out there basically say, “if you turn off all these instances, you’ll save on your cloud bill.” For the most part, the notifications these tools offer only give you hints at one piece of the puzzle.

Opsani is different. How? Well, we go way beyond notifications. Aside from providing helpful insights into your instances, Opsani also digs deep and monitors runtime settings, application parameters, and configuration that are often deemed too “difficult to deal with.” Think garbage collection time, heap size, compute, memory, and more.

Moreover, how these insights from the application parameters, resources, and settings are delivered is up to you. You can receive them as suggestions that you review and then rapidly push to production. Or, you can have the settings auto-promoted, which is how many of our customers, including Ancestry, like to operate. As a cloud optimization platform, Opsani offers a hell of a lot more than just notifications.
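The difference between the two delivery modes can be sketched in a few lines. This is purely illustrative: the `Recommendation` type and the `promote`/`notify` callbacks are hypothetical names, not Opsani’s real API.

```python
# Illustrative sketch only: Recommendation, promote, and notify are
# hypothetical stand-ins, not the actual Opsani interface.
from dataclasses import dataclass

@dataclass
class Recommendation:
    parameter: str   # e.g. "jvm.heap_size"
    current: str
    proposed: str

def handle(recommendations, auto_promote, promote, notify):
    """Either push changes straight to production or queue them for review."""
    for rec in recommendations:
        if auto_promote:
            promote(rec)   # auto-promotion: apply immediately
        else:
            notify(rec)    # suggestion mode: surface for human review

# Usage with stub callbacks:
recs = [Recommendation("jvm.heap_size", "2g", "3g")]
applied, queued = [], []
handle(recs, auto_promote=True, promote=applied.append, notify=queued.append)
```

The same recommendations flow through either path; only the final step — apply now or wait for sign-off — changes.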

Myth #2: Opsani is an agent or sidecar

When we talk about Opsani at conferences, we offer an architectural overview. It shows our SaaS cloud optimization platform on one end, and the client’s infrastructure (displaying their cloud hosting provider) on the other. The view shows a loop: metrics are fed into an APM tool, flow into our Servo, and the Servo sends an API call to the client’s deployment tools to make the changes. We test and update the Servo through canary releases or in load-generation mode; that is, we use one of the client’s instances to test the Servo and make the changes.
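That loop can be sketched as three pluggable steps. The function names and payloads here are hypothetical, chosen for illustration — they are not Opsani’s real API.

```python
# Hypothetical sketch of the measure -> recommend -> adjust loop described
# above; the names and payloads are illustrative, not Opsani's real API.

def optimization_cycle(fetch_metrics, recommend, apply_settings):
    """One pass of the loop: measure, propose new settings, apply them."""
    metrics = fetch_metrics()        # e.g. pulled from an APM tool
    settings = recommend(metrics)    # the Servo proposes new parameters
    apply_settings(settings)         # API call to the deployment tooling
    return settings

# Usage with stub integrations:
new = optimization_cycle(
    fetch_metrics=lambda: {"p95_latency_ms": 180, "cpu_util": 0.85},
    recommend=lambda m: {"replicas": 4} if m["cpu_util"] > 0.8 else {"replicas": 2},
    apply_settings=lambda s: None,
)
```

The point of the architecture is that each step is an integration boundary: the APM tool, the optimizer, and the deployment tools all stay separate, connected only by the loop.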

The misconception arises when people assume that, as with an agent or sidecar, they’ll have to deploy the Servo in every single instance. Companies have hundreds or thousands of instances running; they worry that “adding the Servo” will make their application or service clunky.

But Opsani’s Servo doesn’t live in each individual instance, nor does it sit within the code or the deployment tools. It conducts performance optimization across the infrastructure as a whole. It’s basically traffic control, like a policeman directing cars on the streets. Opsani acts as a kind of dispatcher, saying: “Hey, I’m getting these metrics from the APM or monitoring tool. Deployment tools, you should make these changes, right away.” Moreover, Opsani’s Servo is very lightweight. It’s a small container, comparable in footprint to a t3.nano instance, and you run just one per infrastructure.

Myth #3: Integrating Opsani is a heavy lift

Whenever we talk about how easy our clients find the onboarding process to be, people often feel that it sounds too good to be true. The assumption is that there must be a lot of lift involved in integrating Opsani into a company’s tech stack.

Wrong. Opsani does require a light lift at the beginning to get things set up. This is unavoidable when we take everything into account – from the application parameters that revolve around your app or service, to the resource and runtime settings, to the front-end, back-end, and middleware. Opsani’s performance optimization approach loops in all of these elements, and getting that visibility and reach can’t be done by snapping your fingers. We need to consider a company’s specific needs, and how they should influence the overall setup.

Part of this light lift is the guardrails that we put in place for the Servo. These guardrails tell the Servo: “Hey, these are the knobs we’re going to tune, but we don’t want to go past these thresholds.” For example, never scale below one CPU core, or below a minimum amount of memory.

But once everything is in place and baked into the pipeline, you no longer have to worry about cloud optimization. It doesn’t matter how many teams are working in the environment, or how much new code is being released – the Opsani cloud optimization platform will continue optimizing, so all the developers have to do is concentrate on writing code and releasing new products. After an initial light lift, you’re off to the races.

To see just how untrue these three myths are for yourself, request a demo.