Software engineering is a discipline built on repeatable patterns and tooling. This is how we scale teams and ensure business continuity despite relentless change. Because of that constant change, our practices are limited by the number of tools available that can sustainably support them.

This is where cloud optimization comes in.

The reason cloud computing optimization has fallen out of everyday workflows is that we lack tools that make it easy and scalable to practice in a microservices, cloud-native world. Business advantages are built on being fastest to market and responding to change. Commoditization and pay-as-you-go pricing have allowed us to back-load cost optimization in cloud computing and prioritize speed of delivery. Of course, no company wants to be wasteful, but when given the choice between running efficiently and delivering fast, speed wins every time.

Opsani has a different belief system.

At Opsani, we believe this is a false choice, one that businesses face only because we have lacked the tools to deliver both fast and efficiently as part of the product development process. Cloud cost monitoring and optimization is therefore the next logical step in the evolution of how we build and deliver software, while protecting the velocity necessary to win in competitive markets.

Let’s get down to the basics of cloud optimization.

Cloud computing optimization, or cloud optimization, is the practice of identifying the most efficient configuration for operating an application through data-driven exploration and verification of the available settings. These settings range from the lowest-level details, such as the CPU architecture you are running on, to the most application-specific imaginable, such as which feature flags are enabled. In cloud cost monitoring and optimization, settings include things like instance types, resource allocations through orchestration systems such as Kubernetes, and API-driven configuration of the managed services your application depends on.
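To make the idea concrete, here is a minimal sketch of data-driven exploration over a settings space. The `measure` function, the candidate CPU/memory values, and the latency SLO are all hypothetical stand-ins, not Opsani's engine or API; a real system would deploy each configuration and gather live metrics.

```python
# Hypothetical sketch of data-driven configuration exploration.
# measure() simulates deploying a configuration and observing
# (cost per hour, p95 latency); real measurement takes far longer.

def measure(settings):
    """Return (cost_per_hour, p95_latency_ms) for a configuration (simulated)."""
    cpu, mem = settings["cpu"], settings["memory_gb"]
    cost = cpu * 0.04 + mem * 0.01          # toy pricing model
    latency = 200 / cpu + 100 / mem          # toy performance model
    return cost, latency

# Candidate grid of resource allocations (illustrative values).
candidates = [{"cpu": c, "memory_gb": m}
              for c in (1, 2, 4, 8) for m in (2, 4, 8, 16)]

# Keep only settings that meet the latency SLO, then pick the cheapest.
SLO_MS = 150
results = [(measure(s), s) for s in candidates]
viable = [(cost, s) for (cost, lat), s in results if lat <= SLO_MS]
best_cost, best = min(viable, key=lambda t: t[0])
print(best, round(best_cost, 2))
```

The essential shape is verify-then-select: every candidate is measured against the service-level objective first, and cost is only compared among configurations that actually meet it.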

Opsani has a unique method of cloud optimization.

Currently, the prevailing methods for performing optimization, when it is done at all, typically examine a very narrow subset of settings, such as JVM garbage collection parameters or cache eviction policies, or look purely at the application code through the lens of a profiler. This creates a myopic view of the possibilities and ignores the interactions between components and settings that may be affecting your cloud cost and performance in unexpected, unpredictable ways. By leveraging machine learning and AI technologies, Opsani’s cloud computing optimization engine can systematically explore multivariate configurations that are far too complex for humans to handle on their own.
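The interaction problem can be shown with a toy example. In the sketch below, the cost function and its interaction term are invented for illustration (this is not Opsani's model): tuning one setting at a time gets stuck in a worse configuration than searching the settings jointly, because the two settings only pay off in combination.

```python
# Illustrative toy objective with an interaction term between two settings.
# Lower is better. The 1.3 * replicas * x term means neither setting is
# worth increasing on its own, only together -- a simple "interaction."

def cost(replicas, cache_mb):
    x = cache_mb / 256  # normalize cache size to the same scale as replicas
    return (replicas - 1) ** 2 + (x - 1) ** 2 - 1.3 * replicas * x

replicas_options = [1, 2, 3, 4]
cache_options = [256, 512, 768, 1024]

# One-at-a-time tuning: pick replicas with cache fixed, then tune cache.
r1 = min(replicas_options, key=lambda r: cost(r, 256))
c1 = min(cache_options, key=lambda c: cost(r1, c))
one_at_a_time = cost(r1, c1)

# Joint multivariate search over the full grid.
joint = min(cost(r, c) for r in replicas_options for c in cache_options)

print(one_at_a_time, joint)  # joint search finds a strictly lower cost
```

One-at-a-time tuning settles on 2 replicas with 512 MB of cache, while the joint search finds that 3 replicas with 768 MB is cheaper overall, a result invisible to any method that varies a single setting in isolation.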

The larger the application, the more components and settings there are to take into consideration when optimizing. Producing precise, statistically verifiable measurements of the cost and performance impact of a configuration can require minutes or even hours of data gathering. These discrete steps of measuring and adjusting the application add up quickly. And by the time you have identified the optimal settings for a moment in time, the world has moved on and your developers have committed more code.

This is why it is so important to automate cost optimization in cloud computing and integrate it into the workflow. The complexity and time required to optimize modern applications is exactly what is driving the emergence of dedicated cloud optimization tooling.

Now it’s time to figure out where cloud optimization fits into your pipeline.

The ideal place to integrate cloud optimization into your pipeline depends on the maturity of your DevOps tooling, processes, and software delivery philosophy. That said, Opsani’s most sophisticated customers, who have gone all in on continuous delivery, are embracing cloud cost monitoring and optimization: they let the Opsani engine optimize a canary deployment automatically each time code is pushed to staging or production, then auto-promote the settings cluster-wide once a new optimum has been identified. Customers who still rely on train-based delivery schedules or ad hoc releases may instead want to perform optimization after the CI suite passes and produce an optimization profile to be applied alongside the next release, ideally by generating infrastructure-as-code assets such as Terraform plans or Ansible playbooks.
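As a small illustration of the optimization-profile approach, the sketch below renders a set of identified settings into a Terraform-style `.tfvars` fragment. The variable names and values are hypothetical examples, not a real Opsani output format.

```python
# Hypothetical sketch: emit an optimization profile as an
# infrastructure-as-code artifact (Terraform tfvars-style lines).
import json

# Example optimal settings an optimization run might identify (illustrative).
optimal = {
    "instance_type": "m5.large",
    "replica_count": 3,
    "container_cpu": "500m",
    "container_memory": "1Gi",
}

def to_tfvars(settings):
    """Render settings as `name = value` lines; json.dumps quotes
    strings and leaves numbers bare, which matches tfvars syntax."""
    return "\n".join(f"{key} = {json.dumps(value)}"
                     for key, value in settings.items()) + "\n"

print(to_tfvars(optimal))
```

Generating the profile as a reviewable file like this lets the next scheduled release apply the optimized settings through the same IaC pipeline as any other infrastructure change.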

Part of my team’s job is to help customers consider cloud computing optimization in the context of their actual applications, tools, processes, and teams. We act as trusted advisors who can identify the best way to get started with cloud computing optimization today and elevate your cloud optimization game tomorrow, next week, next quarter, and next year. Striking this balance between cutting-edge engineering excellence and pragmatic, real-world problem solving is something I personally find fascinating and rewarding.