Digital Transformation: Should A Company Use AI?


Should A Company Use AI for a Digital Transformation?

In this clip from our MeetUp, Artificial Intelligence and the Enterprise Stack, Jocelyn, a technology executive and investor, shares her take on when the best time is for a company to use AI for a digital transformation. Tune in to the short clip below to learn more. For more Opsani knowledge, check out our last clip, Does Your Company Have To Be Large to Take Advantage of AI?

Request A Demo

What Is Autonomous Workload Tuning And Is It For You?


What is Autonomous Workload Tuning?

Automation is increasingly being accepted and implemented in IT in general, and by DevOps practitioners in particular. However, there remains a certain resistance to “automating yourself out of a job.” Yet in SRE (Site Reliability Engineering) terms, anything that could be automated but has not been is referred to as “toil.” Repetitive tasks are considered low-value (and probably low-interest) work. The SRE goal is to eliminate toil through automation, freeing systems engineers to focus on the valuable and interesting opportunities.

Tuning a single server that supports an application (often the job not so long ago) is certainly comprehensible. But now consider a cloud native, web-scale, microservices application that runs across dozens of machines in multiple geographic regions: any one person's ability to comprehend the entire system becomes much more superficial. This is not to say that improving overall performance through thoughtful infrastructure or application tuning is not worth attempting; it is possible to improve a system manually. But if there is a tool that would automate this, and probably do the job more quickly and efficiently, the question remains: “Why do manual tuning?”

Let’s say you do have a group of engineers that can do a better job with your current system. It is worth considering that they are unlikely to scale as efficiently as automation. Also, the actual value of that performance delta is likely not as great as putting talented engineers onto tasks with greater value, perhaps something like improving the automation tools’ performance.

Now, in truth, automation will not tune a bad application into a great one. Thought about how the application will work, consideration of the infrastructure that will be needed to support it, and appropriate testing (performance testing, unit and integration testing) still need to happen. But these are the areas that can be tuned by your talented engineers and will likely provide greater value, at least until they can automate those processes.

One area where “autonomous” workload tuning has been popular is tuning database workloads. Auto-admin tools that apply heuristic algorithms and other estimates to tuning decisions are common, though they still often involve a human DBA. However, even this level of ‘traditional’ database automation does not map to cloud-scale database needs. The lack of the DBA role in many cloud database services remains a recognized gap, resulting in reduced efficiency and performance. This gap is being actively pursued: Oracle, for example, announced its first autonomous database cloud in 2017 and has continued to add automated tuning and management features ever since, while most modern cloud databases can be expected to be self-healing and self-scaling. In the Oracle example, adaptive performance tuning is delivered using machine learning algorithms to adjust performance without the need for a DBA.

In a similar manner, Opsani provides an AI-driven performance optimization engine. Unlike the more specialized database example, however, Opsani focuses on tuning infrastructure configurations and integrates with any application. To learn more about improving the performance of your application workloads, try it out for yourself with a free trial.

What is DevOps?


What is DevOps?

It seems the term “DevOps” is self-explanatory: integrate development with operations. Yet ask several people what DevOps is and you will likely get many answers. Some view it as a part of a company’s culture. Others see it as a development methodology. Some might consider it a movement, or even something that encompasses all of these things. Regardless of how one tries to define it, there are crosscutting features that make up what we call DevOps. Let’s have a look.

  • What is DevOps?
  • How did DevOps Come to Be?
  • What Challenges led to DevOps?
  • How do Agile, SRE, and DevOps Relate to Each Other?
  • How Does a Team Implement DevOps?
  • Who Uses DevOps and Why?

What Is DevOps?

Patrick Debois came up with the term DevOps in 2009 by combining “development” with “operations”. If we consider that neither of these combined terms is a technology or standard, it becomes easier to see why understanding DevOps as a culture or movement makes sense. The decision to meld development and operations teams is a cultural aspect of an organization. Looking at the increase in popularity and rate of adoption of DevOps approaches, it is appropriate to consider it a movement. At the same time, there is a range of technologies that support DevOps, and that, combined with the roles of the developers and operators, is the DevOps environment. Any wonder why it is hard to pin down a single definition for DevOps? Still, for the purpose of this article, let’s consider the following:

DevOps is a systems-oriented approach that emphasizes communication and collaboration between development and operations teams. It leverages agile/lean development practices and automation technologies to manage the lifecycle of an application and its environment (infrastructure).

While not explicitly stated, the goal of DevOps, and a primary reason behind its growth in popularity, is that a DevOps culture/process shortens software development time to value. This is done by using short feedback loops to accelerate the cadence of releases.

How did DevOps Come to be?

A search for a “DevOps” manifesto will turn up company-specific documents or individual proposals; however, such a thing does not, in fact, exist. Although Debois coined the term in 2009, there was no one great meeting of minds that created DevOps as a thing. It can be considered to have sprung in large part from Agile development (which does have a manifesto) and the Agile focus on identifying shortcomings of software and rapidly iterating to improve or fix them. This “Dev” approach extends to encompass the entire application support system. In this regard, enterprise systems management (ESM) plays a significant role in the infrastructure/operations side of DevOps. The influence of ESM can be seen in DevOps configuration management, monitoring, and automation.

What Challenges led to DevOps?

Developing and running software are strangely at odds with each other. Software users want change: new features, improved features, and fixed bugs, which leads to the need for continuously changing software. Software users also want reliable services: fast access, no outages or crashes. Operations managers traditionally provide this reliability by finding a solution that works and then avoiding change to preserve stability.

On the Dev side, there is an understanding that pushing out code quickly is the target. On the Ops side, avoiding change has been the traditional approach to delivering reliability. This division of concerns was frequently referred to as a wall that developers would toss their code over for the ops side to deal with. DevOps came about to solve this very real challenge in a harmonious way. By closely integrating all the stakeholders and opening communications, requirements for Dev and Ops could be addressed as part of the whole. The integrated team could now view the entire lifecycle as its responsibility and work together to provide both rapid delivery and operational reliability with much greater efficiency.

Bringing together teams that integrate development and operational members is often more about communication and collaboration than a specific technology set. Some characteristics of effective DevOps teams include:

  • Clear expectations and priorities, and clarity about where these are derived from
  • A culture of collaboration, communication, and sharing within and among teams
  • Embracing automation to reduce human error and free up time for high-value activities
  • Granular testing to provide rapid feedback at the application and infrastructure level

How do Agile, SRE, and DevOps Relate to Each Other?

As you explore the world of DevOps you will often see terms such as Lean/Agile and SRE in the mix. Agile and Lean are approaches to development that emphasize fast feedback and short development cycles. They are very explicitly a cultural or mindset approach to development rather than a specific toolset though tools that support CI (Continuous Integration) processes are important. DevOps is the culture or mindset that drives the collaborative process and ensures cross-boundary communication.  System (or Site) Reliability Engineering (SRE) applies a software engineering methodology to operations through automation. This is where continuous monitoring and CD (Continuous Deployment) are key. Especially in cloud systems, this becomes an “Infrastructure-as-code” approach where infrastructure development very much begins to resemble software development.

How Does a Team Implement DevOps?

What a company or team’s DevOps implementation looks like will be unique. However, there are still commonalities. A successful DevOps culture will eventually include clear methods of collaboration, automation, CI/CD processes, systems monitoring, and rapid remediation.


Communication and Collaboration

Communication is key to a successful DevOps culture. Communication between operations, developers, product owners, executives, and anyone else who is a stakeholder is essential; the lack of it was a primary driver for the creation of DevOps. Many successful DevOps practitioners go as far as to implement “no blame” policies for understanding when things go wrong. This assures that communication remains free and that issues can be both rapidly resolved and not repeated.


Automation

Automation is key to reducing human error (increasing the reliability that operations appreciates) and to freeing up time from repetitive, low-value tasks to allow more emphasis on providing value (the development velocity that developers appreciate).

Continuous Integration

Continuous integration (CI) is an essential part of Agile and serves both to accelerate workflows and to enhance communication. When done correctly, CI encourages developers to merge the code that they are working on into a main “branch” frequently (ideally at least daily). Doing so increases the likelihood of discovering conflicts early and, by keeping changes small, makes the resolution of such conflicts fast.

Continuous Testing (Unit and Integration)

Continuous testing used to be “over the wall” for developers, who would hand off code changes to a QA team. With CI/CD processes, there is an expectation that code is tested before being merged into the main branch. This includes unit tests (does your code do what you think it should?) and integration tests (does your code play nicely with the rest of the code?). This is probably one of the key pieces of DevOps culture that gets ‘overlooked’ in the name of velocity, but it can become a huge road bump when code breaks and a large number of changes need to be reviewed. As part of a CI/CD process, continuous testing makes sure that breaks are small and fixes are fast.
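
The distinction between the two kinds of tests can be shown with a minimal Python sketch (the functions and test names here are illustrative, not from any real codebase):

```python
# A tiny piece of application code and the two kinds of tests described
# above. Names and behavior are illustrative.

def parse_price(text):
    """Convert a user-entered price string like '$3.50' to cents."""
    cleaned = text.strip().lstrip("$")
    return round(float(cleaned) * 100)

def order_total(prices):
    """Sum a list of user-entered price strings, in cents."""
    return sum(parse_price(p) for p in prices)

# Unit test: does parse_price do what we think it should, in isolation?
def test_parse_price_unit():
    assert parse_price("$3.50") == 350
    assert parse_price(" 0.99 ") == 99

# Integration test: do the pieces play nicely together?
def test_order_total_integration():
    assert order_total(["$3.50", "$0.99"]) == 449

test_parse_price_unit()
test_order_total_integration()
```

In a CI pipeline, both kinds of tests run automatically on every merge, so a break is caught while the change that caused it is still small.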

Continuous Delivery and Deployment

The CD in CI/CD can refer to continuous delivery or continuous deployment. Both are processes for getting the software artifact built, tested, and ready for deployment to production. In the case of delivery, release to production requires manual approval; a continuous deployment process is fully automated. Depending on where your team or company is in its DevOps journey, releases may occur infrequently (days, weeks, months) or as frequently as multiple times a day. Although the exact mechanics of CD vary, as described in the CI section, the model of small, frequent releases brings the benefit of small, rapidly repairable failures rather than large catastrophic ones.
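
The difference between the two CDs can be sketched in a few lines of Python; the stage names and approval flag below are hypothetical, but the shape is the point: delivery and deployment share every stage except the final gate.

```python
# Sketch of a release pipeline: continuous delivery and continuous
# deployment differ only at the release gate. Stage names are illustrative.

STAGES = ["build", "unit-test", "integration-test", "package"]

def run_pipeline(require_manual_approval, approved=False):
    """Run the shared automated stages, then release per the CD model."""
    completed = list(STAGES)  # assume each automated stage succeeds
    if require_manual_approval and not approved:
        completed.append("awaiting-approval")       # continuous delivery
    else:
        completed.append("deployed-to-production")  # continuous deployment
    return completed

# Continuous delivery: the artifact is ready, a human approves the release.
delivery = run_pipeline(require_manual_approval=True)
# Continuous deployment: the same pipeline releases automatically.
deployment = run_pipeline(require_manual_approval=False)
```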

Continuous Monitoring

Even with ample and appropriate unit and integration testing, the real world can bring unexpected surprises. Continuous monitoring of software performance and availability is key to providing stability in production. Though we are not going to get into the weeds of specific tools here, monitoring systems integrated with CD tools can even automate remediation. In some cases this could be an automated ‘roll back’ to a previous release if the issue is application derived. Automation can also solve infrastructure-related performance issues through server scaling and tuning.
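
A minimal sketch of that automated-rollback idea, with a simulated monitoring feed and an illustrative error-rate threshold (the threshold and release names are assumptions, not from any particular tool):

```python
# Monitoring-driven remediation: compute an error rate from observed
# response statuses and roll back if it crosses a threshold.

ERROR_RATE_THRESHOLD = 0.05  # act when more than 5% of requests fail

def error_rate(statuses):
    """Fraction of responses that were server errors (HTTP 5xx)."""
    return sum(1 for s in statuses if s >= 500) / len(statuses)

def choose_release(statuses, current, previous):
    """Return the release that should be live given observed responses."""
    if error_rate(statuses) > ERROR_RATE_THRESHOLD:
        return previous  # automated 'roll back'
    return current       # healthy: keep the new release

healthy = [200] * 99 + [500]       # 1% errors: keep the new release
broken = [200] * 80 + [500] * 20   # 20% errors: roll back
```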

Who Uses DevOps and Why?

Some of the largest companies in tech (Google, Amazon, Netflix, Facebook, …) are well known for their leadership in using and pushing the boundaries of DevOps, but anyone in the software development business can implement it. DevOps, when viewed as the journey that it is, can be gradually rolled out. A team might adopt some Agile best practices, put communications expectations in place, and gradually build out a fully automated CI/CD process. That process could then be shared with another team and eventually throughout the company. Understand that the benefits of DevOps are not just for the developers and operators; this is another cultural shift that needs to be recognized. DevOps provides faster time to value, which benefits the bottom line that CFOs and investors care about. CFOs will further appreciate that granular testing and systems monitoring assure efficient resource use. Customers will appreciate the faster cadence of software improvements and increased reliability.

For the above reasons and more, DevOps improves company performance on multiple levels. In Puppet Labs’ “2016 State of DevOps Report,” a couple of key measurable benefits were called out:

Stability & Innovation:

“High-performing organizations spend 22 percent less time on unplanned work and rework. They are able to spend 29 percent more time on new work, such as new features or code.”

Performance Improvements & Stability:

“High-performing organizations decisively outperform their lower-performing peers. They deploy 200 times more frequently, with 2,555 times faster lead times, recover 24 times faster, and have three times lower change failure rates.”

Interestingly, the above points show that Dev, Ops, and business interests all benefit; it is not just one or the other. While security is mentioned in the 2016 report (DevOps high performers spent 50 percent less time managing security issues), the 2019 report focuses on it.

Speedy & On Demand Deployment:

“Integrating security deeply into the software delivery lifecycle makes teams more than twice as confident of their security posture.” “Firms at the highest level of security integration are able to deploy to production on demand at a significantly higher rate than firms at all other levels of integration — 61 percent are able to do so.”


Hopefully it has become clear that since the term DevOps was coined in 2009, it has become a win-win-win solution for all involved. It eliminates the traditional friction between Devs and Ops, and it shortens time to value, a benefit to users, business managers, and executives alike.

This does not, however, come without hard work. As DevOps is put into practice by an increasing number of companies, new best practices and anti-patterns will be discovered. As new tools develop, what the implementation side of DevOps looks like will also change. The journey to DevOps success is transformational and will never end; it will only continue to get better.

Does Your Company Have to Be Large To Take Advantage of AI?


Does Your Company Have To Be Large to Take Advantage of AI?

In this clip from our MeetUp, Artificial Intelligence and the Enterprise Stack, Jocelyn, a technology executive and investor, shares her take on what types of companies can take advantage of AI. She also discusses the different levels of AI companies can leverage. Tune in to the short clip below to learn more. For more Opsani knowledge, check out our last clip, What Makes Opsani Different?

Request A Demo

AWS Fargate: What Are The Positives and Negatives?


AWS Fargate: Positives and Negatives

Fargate is a serverless compute engine for containers, aka a Container-as-a-Service (CaaS) offering from AWS. It works with both Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS). Fargate removes the need to manage infrastructure to run containerized applications. Also, its pay-by-the-second, per-application model can result in cost savings compared with running your own server instances.

What is AWS Fargate?

If you have used either ECS or EKS to run your containerized applications, you are well aware that you are managing infrastructure, specifically clusters of EC2 instances, on which the ECS or Kubernetes service then manages running containers. In the case of ECS, you are responsible for the EC2 instances you are running (you handle scaling, monitoring, patching, and security) in addition to the actual application concerns. Fargate looks a bit like a PaaS in that you no longer need to worry about that infrastructure. While not providing as integrated a dev experience as a Heroku or OpenShift, the CaaS model lets you focus on developing the containerized app and lets Fargate handle running it.

How Does AWS Fargate Work? 

While AWS Fargate still runs on Elastic Container Service (ECS), you are not responsible for managing the ECS service. To run an application on Fargate you will need to: 

  • Have an AWS account
  • Containerize your application
  • Define your application’s CPU and memory requirements
  • Define any necessary IAM / networking policies 
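
The sizing step of the checklist above can be sketched as a Python dict shaped like an ECS task definition. The family, image, and sizing values here are illustrative; consult the AWS documentation for the exact schema and the specific CPU/memory combinations Fargate accepts.

```python
# Sketch of the application-level inputs Fargate asks for, expressed as
# an ECS-style task definition. Values are illustrative.

def fargate_task_definition(family, image, cpu_units, memory_mib):
    return {
        "family": family,
        "requiresCompatibilities": ["FARGATE"],
        "networkMode": "awsvpc",   # required networking mode for Fargate tasks
        "cpu": str(cpu_units),     # e.g. 256 CPU units = 0.25 vCPU
        "memory": str(memory_mib), # task memory in MiB
        "containerDefinitions": [
            {"name": family, "image": image, "essential": True}
        ],
    }

# A hypothetical web service sized at 0.25 vCPU and 512 MiB:
task = fargate_task_definition("web", "example/web:1.0", 256, 512)
```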

We can assume Fargate runs using ECS on EC2 instances, but AWS does not provide much detail about the internal workings. This is perhaps the point: it is not your concern. You are able to manage your containers without creating a cluster of virtual machines, and AWS manages the scaling of the underlying resources.

AWS Fargate: Pluses and Minuses

Now you might think not needing to manage infrastructure and getting pay-by-the-second pricing makes going with Fargate a simple decision. However, there are also drawbacks. As I often caution eager adopters of new technology – know how your use case maps to the functionality of the tool you are considering.

Positives (+):

  • Less Complexity
  • Better Security
  • Lower Costs (Maybe)

Negatives (-):

  • Higher Costs (Maybe)
  • Less Customization
  • Limited Regions Available

Plus: Simplicity

Fargate is a Container-as-a-Service (CaaS) technology. It eliminates the need to manage servers, hence the reason it is a “serverless” technology. While, yes, your containers are still running on servers, you don’t actually need to worry about configuring and maintaining those servers; AWS does that for you. You are still responsible for defining the infrastructure parameters your containers require (e.g. CPU, memory, storage, and networking). Then AWS will run your app for as long, or short, a time as you’d like.

Plus: Better Security

Complexity is problematic from a security perspective. Fargate removes the security burden of managing the complexity of ECS or EKS. Fargate runs each task or pod in its own kernel, providing your applications with their own isolated compute environments. Instead of securing a shared cluster, you embed security within the container itself. There are also third-party companies that offer additional security services for running applications in Fargate.

Plus… or Minus: Costs

On the surface, Fargate provides a tempting cost-saving opportunity when compared to ECS or EKS. It is important to remember that you are still paying for a service; while not explicitly broken out in the case of Fargate, that includes infrastructure management. It will come down to your use case, the size of your infrastructure needs, and how good/efficient your ops team is. The important thing to remember is that the per-hour cost of Fargate is higher than that of ECS or EKS.

Areas where you can expect to see cost savings include:

  • Intermittent or highly variable workloads. Fargate only charges you, by the second, when your container workloads are running. The total time the VM is running does not matter.
  • Interruption tolerant processes can be run on Fargate Spot, which can provide even greater savings.
  • Fargate is good at time and event based (via CloudWatch) task scheduling so making sure that containers only run when needed is easy.
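
The intermittent-workload case can be made concrete with some back-of-envelope arithmetic. The hourly rates below are placeholders to illustrate the math, not current AWS prices; check the AWS pricing pages for real numbers.

```python
# Paying per second for actual run time (the Fargate model) versus
# paying for an always-on instance. All rates are illustrative.

FARGATE_VCPU_HOUR = 0.04   # $/vCPU-hour (placeholder)
FARGATE_GB_HOUR = 0.004    # $/GB-hour (placeholder)
EC2_INSTANCE_HOUR = 0.05   # $/hour, small always-on instance (placeholder)

def fargate_cost(vcpu, gb, seconds):
    """Cost of running a task of the given size for the given duration."""
    hours = seconds / 3600
    return (vcpu * FARGATE_VCPU_HOUR + gb * FARGATE_GB_HOUR) * hours

# A 1 vCPU / 2 GB batch job that runs for 15 minutes out of every hour:
per_day_fargate = fargate_cost(1, 2, 15 * 60) * 24
per_day_ec2 = EC2_INSTANCE_HOUR * 24

# For this intermittent workload, paying only while the containers run
# beats the always-on instance despite Fargate's higher per-hour rate.
```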

Long-running workloads may be better served with EC2 instances, in order to take advantage of spot instances or reserved instances. You no longer pay per container, but along with this comes the responsibility of managing the infrastructure. You also need to make sure the containers you are running are efficiently packed onto those instances.

Minus: Less Customizable

Do you have special requirements for compliance, governance, risk management, and the like? Fargate may not allow the finer-grained control that a more traditional model provides. 

Minus: Limited Regional Availability

If placing your application resources in a specific region or zone is important to you, realize that the service is not available in all regions/zones. Also, far fewer of the EKS-supporting regions support Fargate:

  • US East (N. Virginia)
  • US East (Ohio)
  • US West (Oregon)
  • Europe (Ireland)
  • Europe (Frankfurt)
  • Asia Pacific (Singapore)
  • Asia Pacific (Sydney)
  • Asia Pacific (Tokyo)

For the most up to date availability listing, check the Amazon Regional Services page.  


APM: What Is It and When Do You Need it?


What is APM and When Do You Need it?

Application Performance Management (APM): do you need it? In this post, we go over the basics of what it is and why you might want to use it. A good place to start is the possible confusion over the meaning of APM. I start this very paragraph by telling you that the “M” means management, but some will argue that it stands for Application Performance Monitoring. There is a close tie between monitoring and management, but they are not identical. Let’s clear this up now.

What does APM mean? Monitoring versus management

Monitoring is how we gain an understanding of how an application works. We do this by gathering data about system performance and storing or presenting the results. In some cases, alerts might be configured to notify about significant events. Management applies the results of that monitoring to improve the system. The management side of things could be manually restarting the system or changing configurations. Power users automate processes like this so that a low-memory alert, for example, could trigger the appropriate change to the system’s configuration.
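
The low-memory example can be sketched in a few lines of Python to make the monitoring/management split concrete; the threshold and configuration shape are illustrative assumptions.

```python
# Monitoring observes and raises alerts; management acts on those alerts
# by changing configuration. Threshold and config keys are illustrative.

LOW_MEMORY_THRESHOLD = 0.10  # alert when less than 10% of memory is free

def monitor(free_memory_fraction):
    """Monitoring: observe the system and raise alerts."""
    alerts = []
    if free_memory_fraction < LOW_MEMORY_THRESHOLD:
        alerts.append("low-memory")
    return alerts

def manage(config, alerts):
    """Management: act on alerts by adjusting configuration."""
    new_config = dict(config)
    if "low-memory" in alerts:
        new_config["memory_limit_mib"] = config["memory_limit_mib"] * 2
    return new_config

config = {"memory_limit_mib": 512}
config = manage(config, monitor(free_memory_fraction=0.05))
```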

What is APM?

Now that we’ve cleared up the possible confusion over the acronym, there still remains a fair bit of confusion about the term itself. Gartner provides a functional AP monitoring definition in its Information Technology Glossary:

“Application performance monitoring (APM) is a suite of monitoring software comprising digital experience monitoring (DEM), application discovery, tracing and diagnostics, and purpose-built artificial intelligence for IT operations.”

Opsani is an example of a tool that provides the AI part of the solution. It also integrates monitoring and configuration tools to provide application performance management.

Now that you have a sense of what APM might look like, let’s consider what the pieces of APM are:

Digital experience monitoring

This may also be called end-user or real user experience monitoring. It provides information about any issues with errors or response times that the user will run into when trying to access the application on their device. 

Application discovery modeling

Nowadays, this is typically presented by a GUI that represents the components of an application, or even the interactions between microservice applications, needed to provide the expected business value. To function efficiently, an APM tool should have the intelligence to discover and display not only the initial architecture, but also to detect changes in the environment and update the representation.

User-defined business transaction profiling

This puts the above architectural model to work. Here, a user starts a business transaction (BT) by interacting with your website, for example by entering a search term for an item in your inventory catalog. This “search” may check that the user is appropriately authenticated and then query the database, all of which may be mediated via a message queue, before finally displaying a result on a webpage. This whole process comprises a single BT, and all of the component steps can be monitored.
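
A sketch of how that search BT might be profiled step by step; the step implementations are stand-ins for real service calls, and only the timing structure is the point.

```python
import time

# Business-transaction profiling: break the "search" BT into component
# steps and time each one. Step bodies are stand-ins for real calls.

def timed(step_name, func, trace):
    """Run one BT step and record its duration in the trace."""
    start = time.perf_counter()
    result = func()
    trace[step_name] = time.perf_counter() - start
    return result

def search_transaction(term):
    trace = {}
    timed("authenticate", lambda: True, trace)          # auth check
    timed("queue", lambda: f"query:{term}", trace)      # message queue hop
    results = timed("database", lambda: [term.upper()], trace)  # DB query
    timed("render", lambda: f"<li>{results[0]}</li>", trace)    # webpage
    return results, trace  # per-step timings for the whole BT

results, trace = search_transaction("widget")
```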

Application component monitoring

A deeper level of monitoring that optimizes or troubleshoots your application performance by digging into how the parts are actually functioning. How this is configured will vary by application and component: a server might be monitored to understand CPU or memory utilization, network services for latency, and a database for procedure executions. The fine-grained level of detail available through component monitoring can help identify ways to improve performance or help troubleshoot issues. Below you can see an example of server CPU and memory settings for an application’s front-end and back-end services that Opsani’s AI is optimizing.

Application Performance Analytics

This may warrant its own blog post, but for the purpose of APM, analytics should provide actionable insights about your application’s performance. This includes a starting baseline to compare against. And while some APM solutions just make the raw data a bit easier to view and understand, a good APM package should analyze (analytics, right?) the data and provide some form of problem and optimal-function identification.

Artificial intelligence for APM

The inclusion by Gartner of AI in their APM definition surprised me at first. But in a DevOps world where we want to “Automate All the Things,” it actually makes a lot of sense. Given the large number of variables and quantity of data that an APM system can comprise, AI is a sensible choice to process and present monitoring data. The use of AI can further take you from AP monitoring to AP management, as an AI can find optimal configurations to either fix or improve your application performance.

Do you want an APM now?

The use of APM is a logical choice for anyone who is managing the underlying infrastructure that supports running applications. It helps not only with troubleshooting when things go wrong, but can also identify possible routes to improve application performance. As a developer, APM can also be a boon to your efforts at troubleshooting and optimizing an application. If your company is one of the growing number of DevOps/SRE shops, then that same troubleshooting and optimization functionality fits well into an iterative and collaborative workflow. It is hard to think of a good reason not to use an APM, if it is an option for you. The insight into application performance, into how the application is actually working, allows quicker fixes and faster improvements to performance. All of which equals faster time to value for businesses and customers.

What is an easy way to get started with APM?

If this introduction to APM has you excited about the idea and you want to try it, Opsani has a free trial of its AI-powered optimization engine for containerized applications on Kubernetes. Opsani integrates with a number of monitoring solutions to provide an application optimization tool that, while providing application cost and performance monitoring, continually seeks improved configurations and can update the environment to achieve new optima. To give automated application performance management a spin, get your free trial here.

How is Opsani Different?


How is Opsani Different?

We are the only solution on the market that has the ability to autonomously tune applications at scale, either for a single application or across the entire service delivery platform, and we can do this simultaneously and continuously. Another differentiator is that we can auto-discover workloads as they come on board, automatically deploy SLOs to those applications, and start automating the tuning process. Tune in to learn more! Also, check out our last video, What Are Opsani’s On-boarding Requirements!

Request A Demo

Cloud Cost Monitoring is Key To Kubernetes Resource Management


Cloud Cost Monitoring is Key To Effective Kubernetes Resource Management

If you are running Kubernetes clusters, then you need to have cloud cost monitoring set up to fully maximize your Kubernetes resources, optimize your platform, and drive down expenses.

To truly appreciate the complexity of managing Kubernetes resources and costs, you just have to take a look at the different variables that impact the cost. This list includes recovering abandoned resources, optimizing instance sizes, choosing the right platform (EKS vs GKE vs others), and more.

Visibility with Opsani

Before you can initiate any optimization effort, your first priority is to gain better visibility into the prevailing usage and costs of resources. Opsani has the technology to automatically help you achieve visibility into your Kubernetes resource usage and spending.

With Opsani, you can get a dashboard that allows for real-time tracking and reporting. This makes it easy for your organization to closely follow the costs.

Opsani Cloud Cost Monitoring Dashboard Overview

Opsani’s cloud cost monitoring dashboard gives you a clear and comprehensive picture of your application performance and spending on Kubernetes resources. Opsani’s custom cloud cost monitoring dashboards are designed and built to be seamlessly compatible with both Google Kubernetes Engine (GKE) and Amazon Elastic Kubernetes Service (EKS) clusters.

Cluster-level metrics enable users to pinpoint high-level cost trends and follow spend across production versus development clusters. Metrics at the node level help you see and compare hardware costs, which is quite useful if you are running node pools using various instance types. Lastly, namespace metrics aid with the comparison and allocation of costs across disparate departments and/or applications.
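
The namespace-level roll-up described above amounts to a simple aggregation. A minimal sketch, using made-up per-pod cost records (the namespaces, pods, and dollar amounts are illustrative):

```python
# Aggregate per-pod spend by namespace so costs can be allocated across
# teams or applications. All records are illustrative sample data.

pod_costs = [
    {"namespace": "prod-web", "pod": "web-1", "usd": 12.40},
    {"namespace": "prod-web", "pod": "web-2", "usd": 11.90},
    {"namespace": "dev", "pod": "test-1", "usd": 3.10},
]

def cost_by_namespace(records):
    """Sum spend per namespace from a list of per-pod cost records."""
    totals = {}
    for r in records:
        totals[r["namespace"]] = totals.get(r["namespace"], 0.0) + r["usd"]
    return totals

totals = cost_by_namespace(pod_costs)
```

The same grouping pattern applies at the cluster and node levels; only the key you group by changes.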


The screenshot above shows metrics that DevOps and SRE (Site Reliability Engineering) teams deal with on a regular basis. The dashboard provides you with not just the visibility, but also the staging ground where you can find opportunities for cost optimization. Every piece of information delivers insights that you and your DevOps/infrastructure teams can use to delve deep into workloads, traffic patterns, resource constraints, and other factors that impact your cluster costs. Optimization options in this situation range from vertical pod autoscaling to moving a section of compute to preemptible spot instances.

All metrics and graphs in your dashboard are crucial to managing cluster resources and optimizing cloud costs. With our guidance and technology, you have everything you need to get cost dashboards configured and running.


Prior to any cloud cost monitoring and cost optimization endeavor, there are three things you need to take care of. One, you will require a Kubernetes cluster. Two, you need to configure the kubectl command-line tool so it can communicate with your cluster. And three, you need to install Git.

Know Kubernetes First and Foremost

Opsani can help you automatically unleash the full potential of your Kubernetes platform while keeping your costs to a manageable level. But before you perform any cloud cost monitoring and optimization, it is essential that you have a deep and solid understanding of what Kubernetes is all about.

There are many optimization actions that you can perform to reduce Kubernetes costs. But if you don’t have sufficient Kubernetes knowledge, discovering the best way to optimize your Kubernetes clusters manually to bring down spend can be an extensive exercise.  

For example, knowing when and if to tune your infrastructure with the cluster autoscaler, or to focus on tuning the application with the Horizontal Pod Autoscaler (HPA) or the Vertical Pod Autoscaler (VPA), already presents a complex set of options to choose from. And, as there are others, the number of possible parameters to consider quickly grows to a point where improving efficiency and decreasing cost is quite possible, but knowing that your configuration is truly optimal is hard. The figure below shows various combinations of CPU and memory settings for a two-tier application tested over a five-hour period as Opsani’s AI continues to home in on an optimal configuration. 
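A quick back-of-the-envelope illustrates why the search space explodes. The specific CPU, memory, and replica values below are hypothetical; the point is only that candidate configurations grow multiplicatively with each tunable dimension, and that is before considering multiple tiers or autoscaler parameters.

```python
from itertools import product

# Hypothetical tunables for a single application tier.
cpu_settings = [0.25, 0.5, 1.0, 2.0]    # cores per pod
mem_settings = [256, 512, 1024, 2048]   # MiB per pod
replica_counts = [1, 2, 4, 8]

configs = list(product(cpu_settings, mem_settings, replica_counts))
print(len(configs))  # 4 * 4 * 4 = 64 combinations for one tier alone
```

Two tiers with the same tunables would already give 64 × 64 = 4,096 combinations, which is why exhaustive manual testing stops being practical.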

Opsani leverages ML algorithms to provide continuous optimization for Kubernetes. Where finding an optimal configuration is challenging or impossible for a human, the Opsani AI handily finds and applies it to the environment. Further, Opsani continually refines its understanding of the optimum over time and through load variations.

Contact Opsani to learn more about how we can help you optimize your infrastructure and cut your costs with the power of AI and ML. You can also sign up for a free trial and experience how Opsani can take your business to new heights.

Artificial Intelligence and the Enterprise Stack


Artificial Intelligence and the Enterprise Stack

In case you were unable to attend, this month we are streaming the presentation Jocelyn Goldfein, Managing Director at Zetta Venture, gave at our recent MeetUp. She discussed how the pace of change has been crazy for enterprises and how they now have to turn on a dime. How can they do this? By looking into AI, because it can help solve problems and keep up with this rapidly evolving industry. Tune into last month’s webinar How Opsani Delivers Value to Enterprise SaaS Platforms to learn more about Opsani!

Request A Demo

CI/CD: What Does It Mean and When to Use It?


What is CI/CD? What it means and when to use it.

You may have heard the stories of companies that have abandoned the ‘release cycle’ and push code to production multiple times a day.  This might be a new feature, a small update, or a bug fix. Companies like Google, Netflix, and Amazon push code to production hundreds if not thousands of times a day.  While the cultural processes of DevOps and SRE (Site Reliability Engineering) play into why this is possible, Continuous Integration and Continuous Deployment (CI/CD) are what drive the very possibility of rapidly and continuously improving code. 

If you’ve spent any time in the world of software development then you have certainly come across CI/CD.  While the CI, continuous integration, aspect is fairly well understood by software developers as a part of the development process, CD is more mysterious. This may be because it crosses over into the realm of “operations.” The fact that CD may refer to continuous delivery or continuous deployment also adds some confusion.  The image below provides an overview of a typical CI/CD workflow and you can see where the Dev-centric CI process hands off to the Ops-centric CD process.

Continuous integration 

Focused on automating the building of software with testing built into the process, CI is primarily the developer side of the CI/CD equation. In the graphic above, the flow is to create some initial code (or modify the existing, stored code), run tests on the code, and, if the tests pass, update the codebase. This can cycle through multiple times. The stored code – aka the “artifact” – could be manually uploaded somewhere to run, but in a world where automation is increasingly the goal, such as in the world of DevOps and SRE, both CI and CD accelerate the deployment process. There are many tools that support setting up a CI pipeline; Jenkins, Travis, GitLab, CircleCI, and Azure Pipelines are examples.
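The CI loop just described can be sketched in a few lines. This is a toy model, not any particular CI tool's API: `ci_pipeline`, `run_tests`, and the string results are all hypothetical stand-ins for a real system's test-and-merge steps.

```python
def run_tests(tests, change):
    """A change passes CI only if every test accepts it."""
    return all(test(change) for test in tests)

def ci_pipeline(change, tests, codebase):
    """Run the suite; merge into the shared codebase only on success."""
    if run_tests(tests, change):
        codebase.append(change)   # the "update the codebase" step
        return "merged"
    return "rejected"

# Toy usage: a single test that rejects changes containing "bug".
tests = [lambda change: "bug" not in change]
codebase = []
ci_pipeline("add feature", tests, codebase)   # -> "merged"
ci_pipeline("add bug", tests, codebase)       # -> "rejected"
```

The essential property is the gate: nothing reaches the shared branch without first passing the automated suite.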

Continuous Delivery vs Continuous Deployment

Although these two terms are different, their differences really come at the end of the process, so we’ll come back to that in a minute. The overall CD process is much like the CI process, but now the idea is that testing goes on in an environment that includes any other interactions that will be needed for the final presentation of the code. Especially in cloud systems with API-driven automation, the process looks much like the CI process. If all tests pass, the new code is assigned a version and….  Now we need to consider the difference between CD and … CD. Continuous delivery simply means that there is a manual step – a final check – that releases the code into the wild.  Continuous deployment automates this last step with the assumption (recognition) that if the code passes the CD stage tests, it is good to go and can be released into the wild. Like CI, there are many solutions to help you get your CD on – Jenkins, GitLab, and Azure Pipelines will be familiar from the CI service examples. Netflix’s open source Spinnaker is also growing in popularity.
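The delivery-versus-deployment distinction boils down to one gate, which the sketch below makes explicit. The function and its arguments are hypothetical, meant only to show that the two pipelines are identical except for whether a human approval sits before release.

```python
def release(artifact, approved_by=None, continuous_deployment=False):
    """Continuous deployment releases automatically once tests pass;
    continuous delivery waits for a manual sign-off first."""
    if continuous_deployment or approved_by is not None:
        return f"released {artifact}"
    return f"{artifact} staged, awaiting approval"

# Continuous deployment: straight out the door.
release("v1.2", continuous_deployment=True)   # -> "released v1.2"
# Continuous delivery: parked until someone signs off.
release("v1.2")                               # -> "v1.2 staged, awaiting approval"
release("v1.2", approved_by="release-manager")  # -> "released v1.2"
```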

When and Why Does CI/CD Work?

It is possible to build a pipeline like the one in the diagram and still cause a critical failure. The reason that so many companies are confident that CI/CD is the way to go, even in the face of the potential for such failure, is that those companies’ DevOps/SRE teams are doing things correctly.

  1. Push small code changes, frequently. This goes for CI and CD.  Version control (and branching dev processes) is certainly going to be part of how code is developed, but those changes should be small.  Many will recommend that whatever you are changing should be ready to push to the master branch at the end of the day. This makes sure that if that change does break something, it should be easy to find and fix.  It also means that what is being pushed (and released) is relevant and you don’t spend weeks on code that suddenly becomes irrelevant.
  2. Build tests. Let me say it again – build tests. Again, a CI and CD principle. Testing the code that is being pushed against automated builds is probably the one thing that gets overlooked in the rush to build something shiny. Not having tests brings a barrel of regret when something deep in the code breaks far into the development process. Although strange things will happen, building tests to make sure that the new code behaves as advertised is a first principle of CI processes.
  3. Use both unit and integration tests. This is more popular on the CI side, but has some relevance to the CD side, especially if you follow the Infrastructure-as-Code model.  You are building tests for your code? They are passing? Good. Now you need to build both tests that make sure your fancy new feature does what it should, the unit test, and tests that it plays nicely with the rest of the code base, the integration test.  
  4. Fix the build before pushing new code. Applies to CI and CD processes. Hopefully this means that tests are failing rather than an actual critical system failure, but the idea here is that you don’t want to pile stuff that could or should work on a system that is broken. This could mean rolling back to a previous version in production, but the better principle is to version forward with a quick fix.  This is the point where you see the value of principles 1, 2, and 3 (you were following these, right?) as small changes should equal small, quick fixes.
  5. Automate all the things. Automation, with testing, will produce systems that are reliably reproducible and avoid the toil of repeating processes manually and potentially adding human error into the system each time. Although this does not imply a static system (see the next point) it does mean that when the code is pushed and passes its tests in the CI system, a functioning and tested CD system should push that code straight through to production with confidence.
  6. Continuously improve the process. Our code lives in a constantly changing environment, and as business, security, infrastructure, and other demands change, your system is likely to need to respond.  Because CI/CD is a Dev & Ops process, there are always likely to be places where the process can be improved, simplified, and automated further. Having communication across concerns, a core DevOps principle, will certainly shine a light on where opportunities for improvement lie. 
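Principle 3 is worth a tiny concrete example. The two functions below are hypothetical application code; the point is that the unit test exercises one function in isolation, while the integration test checks that the pieces compose correctly.

```python
def format_price(cents):
    """Render an integer cent amount as a dollar string."""
    return f"${cents / 100:.2f}"

def render_receipt(items):
    """Summarize a {name: price_in_cents} order using format_price."""
    total = sum(items.values())
    return f"{len(items)} items, total {format_price(total)}"

# Unit test: the formatter alone, in isolation.
assert format_price(250) == "$2.50"

# Integration test: the receipt and the formatter working together.
assert render_receipt({"tea": 150, "scone": 100}) == "2 items, total $2.50"
```

A unit test failure points at one function; an integration test failure tells you the pieces no longer agree, which is exactly the distinction the principle is after.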


Hopefully all this sounds like the way that applications should be built and deployed. This is increasingly a consensus view.  However, while the processes can be laid out and there are multiple tools to support CI and CD processes, it is important to remember that there is a mindset/cultural shift that is required. Going from a waterfall, big release every six or 12 months to pushing code to production multiple times every day does not happen overnight. It is not something that really works if it is only partly adopted.  Even within a single team, having a single engineer not following the necessary testing protocols can derail things.  As when considering adopting any new technology, it is good to start small: build teams that get the process down, validate how things work for their specific situation, and share their knowledge.  From there it can expand to cover the rest of the company. The CI/CD journey is not one that ends, but rather one that improves, continuously. Contact Opsani to learn more about how our technology and products can help.