Focus on Velocity, not Speed, to Improve Your Cloud

One major benefit of cloud systems is the speed at which infrastructure can be deployed to support applications. This cloud service model, known as Infrastructure-as-a-Service (IaaS), gives users ready access to servers, storage, and networking capabilities. In an enterprise setting, there is value in a service portal where users can simply and quickly provision a system for application deployment.

Using IaaS gives end users quick access to all the tools they need to launch the infrastructure they want, but ease of deployment should not be confused with optimal configuration. As cloud systems become easier to deploy, often with just a few clicks and menu selections in a graphical user interface (GUI), it ironically becomes more important to understand the implications of these easy picks for the application being supported.

Cloud computing GUIs provide training wheels with a focus on speed and reliable, but basic, functionality. Achieving velocity requires more: configuring a dynamic infrastructure that changes with your app to deliver optimal performance and value takes deeper understanding and a bit more work.

The fast, easy, and reactive approach to infrastructure can get you up and running quickly and meet expected performance needs, but it tends not to respond to changing application needs, and the resulting tendency to over-provision leads to inefficient resource use. A more application-centric approach invests time up front, when configuring the system, to automate the matching of infrastructure to resource needs, producing an application that is both more performant and more efficient.
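As a toy illustration of the difference (the demand numbers and units are hypothetical, not from any real workload), compare provisioning statically for peak demand with sizing capacity to demand hour by hour:

```python
# Hypothetical hourly CPU demand (cores) for one application over a day's shift.
hourly_demand = [2, 2, 3, 8, 12, 6, 3, 2]

# Reactive, speed-focused approach: provision for peak, all the time.
static_capacity = max(hourly_demand) * len(hourly_demand)  # core-hours paid for

# Application-centric approach: capacity tracks demand each hour.
matched_capacity = sum(hourly_demand)                      # core-hours paid for

print(static_capacity, matched_capacity)                   # 96 vs 38 core-hours
print(f"static utilization: {matched_capacity / static_capacity:.0%}")
```

Even in this simplified sketch, provisioning for peak leaves most of the paid-for capacity idle; matching capacity to demand captures the same performance at a fraction of the resource spend.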

Velocity in public cloud

While you still need some technical knowledge to deploy a server with IaaS, the growing number of ‘one-click installs’ is really about speed, not velocity. Public cloud is focused on speed and reliability. Providers want to deliver a smooth and easy process for users to spin up resources. They also don’t want an application to go down because of the infrastructure they have delivered.

Speed is great for getting up and running, especially when testing basic functionality, but it will not deliver maximum efficiency for your application. Your workload may be running at the wrong speed or be vastly over-provisioned simply because you took advantage of a speedy deployment path. Even if the initial infrastructure selections from the UI menu took your application’s peak-time and availability requirements into account, this deployment path does not take advantage of the more nuanced feedback systems that optimize long-term performance.

Velocity in private cloud  

Private cloud systems vary more in the speed they provide, but mature systems approach or exceed the ease of deployment and reliability of public clouds. Even as a private cloud gives users on-demand access to infrastructure, the smaller menu of private IaaS deployment options further dampens application velocity.

The probability of configuring infrastructure that is mismatched with ideal application performance grows when the truly optimal configuration is not even available. While a private cloud is generally more flexible than a public cloud and can be modified to provide truly custom, application-specific resources, the need to engage IT services for that modification hinders both speed and velocity until the modification is rolled out.


To optimize applications to provide the most business value, it is time to let go of a focus on infrastructure deployment speed and shift to the application focus of velocity. The process has to center on the optimal deployment of apps to further specific enterprise objectives. For this to work, application optimization needs to target a service function that maximizes profit rather than speed of rollout. Now that public and private clouds have made IaaS deployment quick and reliable, modern application integrations make it possible to deploy in a way that is still speedy but focuses on meeting or exceeding market performance standards and efficiently supporting variable workloads.

How can you shift from speed to velocity?

As cloud operations have moved past the early days, when getting systems up and running reliably was the challenge, the focus on improving performance has grown. For velocity to be the focus of your application deployments, app infrastructure needs to be delivered with more attention to the changing needs of applications in production environments. This can be done through infrastructure that is optimized for scalability and designed to reconfigure itself to meet or exceed the required performance level.
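A minimal sketch of what "reconfigures itself to meet the required performance level" can mean in practice is a feedback loop that observes a performance metric and resizes capacity toward a target. The metric (request latency), target value, and replica bounds below are illustrative assumptions, not any particular vendor's algorithm:

```python
# Sketch of a velocity-oriented feedback loop: observe a performance
# metric, compare it to a target, and resize capacity to track demand
# instead of over-provisioning up front. All numbers are hypothetical.

def desired_replicas(current_replicas: int,
                     observed_latency_ms: float,
                     target_latency_ms: float = 200.0,
                     min_replicas: int = 2,
                     max_replicas: int = 20) -> int:
    """Scale proportionally to how far observed latency is from target."""
    if observed_latency_ms <= 0:
        return current_replicas  # no signal, hold steady
    ratio = observed_latency_ms / target_latency_ms
    proposed = round(current_replicas * ratio)
    # Clamp to safe bounds so the loop can't scale to zero or runaway.
    return max(min_replicas, min(max_replicas, proposed))

# Latency above target -> scale out; below target -> scale in.
print(desired_replicas(4, 400.0))   # 8
print(desired_replicas(8, 100.0))   # 4
```

This is the same proportional-scaling idea used by common autoscalers (e.g. Kubernetes' Horizontal Pod Autoscaler); a production loop would also smooth the metric over a window and rate-limit changes to avoid oscillation.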

An operational model focused only on speed will certainly result in both cost and performance inefficiencies. The growth of open-source tools and SaaS applications that support application performance (e.g. monitoring and scaling services) makes shifting to a velocity focus ever easier. With a bit of planning, application velocity is well within reach.

Seamlessly integrating several components to improve application performance is at the heart of the microservices architectural model that a majority of modern cloud-native applications are built on. It is not unusual for a cloud application, even in a private cloud, to connect to a public cloud or SaaS application for some functionality. While some thought needs to be put into how components interface, all as-a-Service providers are aware of the need for ease of use and reliability. The result is an application that utilizes multi-vendor functionality to run as a single operational unit that automates and simplifies upgrade and configuration tasks.

Shifting from a speed-focused, infrastructure-centered model to an application velocity model that puts app performance first is the path to optimal user satisfaction and business cost savings. An added benefit of wholly adopting an application-velocity-first mindset is an increased focus on process automation. A system that monitors application performance and automatically adjusts infrastructure to meet application needs will, by default, reduce the toil of repetitive IT team tasks. This frees IT administrators to prioritize the high-value work of improving and automating the overall infrastructure to support application performance rather than repetitive, reactive, and low-value infrastructure adjustments.

The explicit and primary goal of Opsani’s continuous optimization engine is to make sure your cloud system is running at maximum velocity. Opsani takes the complexity of system limits and desired business outcomes, calculates the optimal configuration, and returns the appropriate configuration changes. The result of this automated process is reduced cost, improved application performance, and engineers who are freed from toil to do interesting things for your business.

Contact Opsani to learn more about how they can help you optimize your infrastructure and cut your costs with the power of AI and ML. You can also sign up for a free trial and experience how Opsani can take your business to greater heights.