How to Manage Requests and Limits for Kubernetes

[ARTICLE]

How to Manage Requests and Limits for Kubernetes

Managing requests and limits is fundamental to cluster performance and application optimization. Kubernetes’ scheduler manages the complexity of determining the best placement based on the availability of resources on individual cluster nodes. What this looks like will vary depending on the types of nodes available and the resources required by individual applications.

Kubernetes will do its best to make sure your system remains up and running; this is a primary function. However, default settings guarantee neither that your system uses available resources efficiently nor that application performance will be unaffected. One way to tune Kubernetes to address both of these issues is to set requests and limits.

What are Requests and Limits?

The default compute resources that Kubernetes manages are CPU and memory, and requests and limits can be set for both. A request defines the least amount of either resource that an application needs and determines whether a Pod can or cannot be scheduled on a given node. Note that requests and limits are applied to individual containers within a Pod, but the total of all requests in a given Pod is used in aggregate to determine placement on a node.

Limits set the amount of a resource that a process (container) can use, if available, on the node it is running on. If the process exceeds that limit, how Kubernetes enforces the limit differs for each resource: for memory, the entire Pod may be terminated; for CPU, the process’ access to CPU resources may be throttled. Choosing settings appropriate for your application is key both to making sure your app has enough resources to run efficiently and to making sure that Kubernetes can pack applications onto appropriate nodes without wasting resources.

How Requests Work 

Along with assuring the resource allocation the application needs to run correctly, a resource request for a container helps the Kubernetes scheduler decide on the appropriate node on which to place a Pod. Here is an example of what this might look like in your Podspec:
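(A minimal sketch; the image tag is illustrative.)

    apiVersion: v1
    kind: Pod
    metadata:
      name: redis-db
    spec:
      containers:
      - name: redis
        image: redis:6.2          # illustrative image tag
        resources:
          requests:
            memory: "1Gi"         # least memory this container needs
            cpu: "500m"           # half a CPU unit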

In the example above, the Pod has requests set for a single application container (a Redis database) and could be scheduled on any node that has at least 1 GB of memory available and half a CPU unallocated. On an empty node with 4 GB of memory and one CPU, Kubernetes could schedule two of these Pods.

How Limits work 

Limits ensure that a running process does not use more than a certain share of the resources on a node. What happens when a limit is exceeded differs between memory and CPU resources. When a container starts to exceed its CPU limit, the kubelet throttles the process. Although the application is still running, its performance will be degraded as its access to CPU resources is restricted.

Exceeding a memory limit results in an out-of-memory (OOM) event, in which case the entire Pod is terminated. It is worth noting that with a multi-container Pod, an OOM event in just one of the containers will still cause the whole Pod to be terminated. Kubernetes will likely respawn the Pod, but if the process again hits its memory limit, it will again be terminated; the end result is, once more, degraded performance.
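As a sketch of how this looks in a container spec, limits sit alongside requests (the values here are illustrative):

    resources:
      requests:
        memory: "1Gi"
        cpu: "500m"
      limits:
        memory: "2Gi"     # exceeding this risks the Pod being OOM-killed
        cpu: "1"          # usage above one CPU is throttled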

Setting Requests and Limits

Because no one wants to see application performance degraded by running up against resource limits, both resource requests and limits are frequently best-guessed or intentionally overprovisioned. Unfortunately, this can result in significant excess cost, as resources are reserved by the overprovisioned Pods but never used. Taking the time to set up a monitoring process and validate actual CPU and memory usage allows appropriate request and limit values to be set. This avoids the performance hit of setting limits too low and is one way to achieve much better resource use (bin packing). This information can further inform the selection of nodes for your cluster to further tune application performance.
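As one example of checking actual usage, if the metrics-server add-on is installed in the cluster, kubectl can report per-Pod consumption (the Pod name and figures below are illustrative):

    # requires the metrics-server add-on
    kubectl top pod redis-db
    NAME       CPU(cores)   MEMORY(bytes)
    redis-db   210m         480Mi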

For completeness, there is also a hardware side of the equation, as taking the time to determine appropriate CPU and memory requirements for your application and the expected overall system scale will also help you choose the appropriate infrastructure to build your cluster with. More often than not these days, the main constraint on node sizes and features is what your service provider offers, though for many public clouds you have a fairly sizable menu of options and the ability to tune your node instances for memory or CPU performance.

Kubernetes will do its best to bin pack efficiently, and setting appropriate limits and requests, along with selecting appropriate hardware for your application’s needs, can result in improved application performance and cost savings. The challenge is that with a constantly changing cloud environment and a large number of possible request and limit settings and node configurations, the options quickly become overwhelming. For an AI/ML algorithm, however, this is a manageable task. Opsani applies AI-driven Kubernetes automation to give you back the time otherwise spent toiling to evaluate and adjust system function and lets you get back to doing more interesting and meaningful work. Opsani seamlessly integrates with Kubernetes to automate the optimization of cluster workloads, with benefits including increased productivity, more stable applications, more agile processes, and more. Contact Opsani to learn more about our technology and products that can further improve your Kubernetes performance.


Black Magic: What Does It Mean and Why is It So Important?

[VIDEO]

What is Black Magic and why is it so important for app performance?

What is black magic, and what are the key components of black magic that sway an application one way or another in terms of impressive performance gains? Everything related to tuning Java applications gets called black magic because it is hard to do: too many factors matter, and too many parameters can be adjusted to fit those factors. Of course, it is not really black magic, because there is a good and predictable way to go about it. Even understanding how the application works is not directly correlated with knowing how to tune it. This is where machine learning solves a problem that humans struggle with, because it does not need to know the internal workings of the application. ML is also persistent and will keep trying. It can also remember what it has tried and find correlations between factors that may appear unrelated to humans. Especially in Java, there is a lot of magic that happens in the virtual machine: there is just-in-time compilation, and there is the whole business of heap management and the complexity of garbage collecting different objects with different lifecycles.

So, even things that look like the same operation may, when the operation happens a second time, take a different code path and have different performance characteristics. There is a lot of complexity, and there are two ways to go. One way is the white-box approach: become extremely granular in your level of understanding of what is going on, tracing every single line of code and every function call, understanding how they work and how to minimize their cost. The other approach is to say, you know, I am looking at this as a black box; then you can optimize, but you have to define what the important metrics are. Is it response time, requests per second, and so on? For more insights, watch Tuning by Humans vs Machine Learning Part 1 and Part 2.

Request A Demo

How Opsani Delivers Value to Enterprise SaaS Platforms

[WEBINAR]

How Opsani Delivers Value to Enterprise SaaS Platforms

Our Director of Sales, Amanda Summers, will discuss how Opsani delivers value to enterprise SaaS platforms and why application tuning is an important part of a company’s overall cost optimization strategy. Come learn about how Opsani can dramatically cut your cloud costs, and enhance application performance all at the same time.

Request A Demo

What is the Difference between Docker Swarm and Kubernetes?

[ARTICLE]

What is the Difference between Kubernetes and Docker Swarm?

Containerization, along with DevOps processes, has accelerated the ability to build, deploy, and scale applications on cloud systems. Containers have also been a boon to microservices-based applications, where the overall application service may comprise two, three, or more smaller applications. The intentional independence of those API-coupled smaller services means each can be updated, scaled up or down, and even entirely replaced as needed. The speed, responsiveness, and flexibility of these systems also bring added complexity that is inefficient to manage with traditional manual IT processes.

Enter Container Orchestration Engines (COEs) like Kubernetes and Docker Swarm. These are container management automation tools that handle the complexity of web-scale applications with ease.

What is Kubernetes?  

Kubernetes (also referred to as K8s) is a COE that was initially developed by Google, based on systems they use to run containers at web scale, and then open-sourced. It can be deployed on almost any kind of infrastructure: anything from your laptop, to a local data center, out to the scale of a public cloud. It is even possible to have a Kubernetes cluster that spans different infrastructures, such as a hybrid cloud with public and private resources. Kubernetes has a lot of container management “intelligence” built in, ranging from placing containers on the appropriate nodes based on resource requirements, to load balancing across applications, scaling applications up and down in response to load, restarting or replacing stalled and failed containers, and more.

What is Docker Swarm? 

Docker Swarm, or just Swarm, is also an open-source COE. Docker, the company that created the Docker container, built Docker Swarm as the Docker-native orchestration solution. One benefit for users of Docker containers is the smooth integration this provides. For example, Docker Swarm uses the same command-line interface (CLI) as Docker containers. If you are all-in on using Docker for your system, Docker assures backward compatibility with other Docker tools. Swarm is highly scalable, extremely simple to deploy, and provides container management features such as load balancing and autoscaling.

Kubernetes vs Docker Swarm 

Though both COEs are open-source, run Docker containers, and provide similar functionality, there are a number of significant differences in how these tools operate. Below, we’ll look at some of the notable differences and consider the pros and cons of the differing approaches.

Installation 

Kubernetes: Installing Kubernetes requires some up-front decisions, for example which networking solution to implement, and the initial configuration must be defined manually. Information about the infrastructure is also needed ahead of time, including node roles, the number of nodes, and node IP addresses. It is worth noting that most major cloud providers offer hosted Kubernetes services that remove much of the friction involved in building your own system.
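As a hedged sketch of that up-front work, bootstrapping a control-plane node with the kubeadm tool surfaces the networking decision immediately; the CIDR below is the one conventionally used with Flannel, and the join parameters are placeholders filled in from kubeadm init’s output:

    kubeadm init --pod-network-cidr=10.244.0.0/16
    # on each worker node, using values printed by init:
    kubeadm join <control-plane-ip>:6443 --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash>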

Docker Swarm: Installing Docker Swarm is about as easy as installing any typical application with a package manager. Creating a cluster just requires you to deploy a node (server) and tell it to join the cluster.  New nodes can join with worker or manager roles, which provides flexibility in node management.
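A sketch of that flow (the address is a placeholder):

    docker swarm init --advertise-addr 192.0.2.10
    # on each new node, using the join token printed by init:
    docker swarm join --token <worker-token> 192.0.2.10:2377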

Application (Container) definition

Kubernetes: Applications are deployed as multi-container “Pods” in Kubernetes.  A pod minimally includes the application and a network service container. Multi-container applications can be deployed together in a pod.  Deployments and services provide abstractions that help to manage multiple Pod instances. 

Docker Swarm: Applications are deployed as services in a swarm, with multiple replicas of a single-container image spread across the cluster. Docker Compose can be used to deploy multi-container applications once those applications are defined in a YAML configuration file.
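A minimal Compose-style definition, deployable to a swarm with docker stack deploy -c docker-compose.yml myapp, might look like this (service names and image tags are illustrative):

    version: "3.8"
    services:
      web:
        image: nginx:1.21        # illustrative web tier
        ports:
          - "80:80"
        deploy:
          replicas: 3            # Swarm maintains three replicas of this service
      redis:
        image: redis:6.2         # illustrative backing service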

Container Setup

Kubernetes: Early in its development, Kubernetes aimed to support a wide range of container types in addition to Docker. As a result, Kubernetes’ YAML, API, and client definitions differ from those used by Docker. In addition to running Docker containers, it is also possible to run containers on Kubernetes via the CRI-O and containerd runtimes.

Docker Swarm: Docker Swarm was built to run Docker containers and “Docker Ecosystem” tools. It does not support other container runtimes. Also, it is worth noting that the Docker Swarm API supports much, but not all, of the Docker API.

Networking

Kubernetes: Kubernetes integrates with a number of networking technologies, with the open-source Calico and Flannel solutions being among the most popular. With Flannel, containers are joined via a flat virtual network, which allows all Pods to interact with one another, with restrictions set by network policy. TLS security is possible but requires manual configuration.
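As a hedged sketch of such a network policy, the following would restrict ingress to database Pods so that only web Pods can reach them (the labels are illustrative):

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-web-to-db
    spec:
      podSelector:
        matchLabels:
          app: db          # the Pods this policy protects
      ingress:
      - from:
        - podSelector:
            matchLabels:
              app: web     # only Pods with this label may connect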

Docker Swarm: Nodes are connected via a multi-host ingress overlay network that connects containers running on all cluster nodes. A node joining a swarm cluster generates the overlay network for services across all hosts in the Swarm, along with a host-only Docker bridge network for its containers. Inter-container networks can be further configured manually. Authentication (TLS) between nodes is configured automatically.

Load Balancing

Kubernetes: Kubernetes exposes load balancing through the concept of a Service, which abstracts the set of Pods that comprise it. Configuring a Service is a manual process that requires a service definition, including assigning Pods and policies.
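A sketch of such a definition (names and ports are illustrative); on supporting clouds, type: LoadBalancer provisions an external load balancer in front of the selected Pods:

    apiVersion: v1
    kind: Service
    metadata:
      name: web-lb
    spec:
      type: LoadBalancer
      selector:
        app: web           # traffic is balanced across Pods with this label
      ports:
      - port: 80           # port exposed by the Service
        targetPort: 8080   # port the application containers listen on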

Docker Swarm: Swarm provides automated, built-in internal load balancing. All containers are on a common network that allows connections to containers from any node. Services can be assigned automatically or can run on ports specified by the user. It is possible for users to specify ports to be used for load balancing.

Scalability

Kubernetes: Kubernetes’ focus on reliably maintaining cluster state and the large, unified set of APIs slows down container scaling and deployments as compared with Swarm.

Docker Swarm: Docker Swarm deploys containers extremely fast, even on large clusters, as compared to Kubernetes.

High Availability

Kubernetes: Unless directed to do otherwise, Kubernetes distributes Pods among nodes to offer high availability. Kubernetes detects unhealthy or missing pods and ensures adherence to the desired configuration by appropriately deleting and redeploying Pods.

Docker Swarm: Docker Swarm also offers high availability when services are replicated across a recommended minimum of three Swarm nodes. The Swarm manager nodes in Docker Swarm are responsible for the entire cluster and handle the worker nodes’ resources. Swarm services are self-healing so that if a container or host goes down, Swarm will bring the service back to the desired state.

In a number of other aspects, such as distributing applications across nodes for high availability, self-healing, container updates and rollbacks, and service discovery, the specifics differ in detail but the functionality is relatively equivalent. The major difference between the two platforms derives from Docker’s focus on ease of use and support specifically for the Docker ecosystem, whereas Kubernetes’ goal of being as multi-purpose as possible, with the ability to select among multiple options (e.g. container runtime, networking solution), shows in an open and modular design that requires more upfront, manual work to get going.

One final consideration in selecting either tool is the trend in popularity: Kubernetes, with a growing community and the support of the CNCF, is increasingly the COE of choice. Docker Swarm has not fared as well; while also open source, it was controlled by Docker (later purchased by Mirantis) and has seen neither the same growth in community support nor the same adoption, as can be seen by proxy in the graph below.

Once you have Kubernetes up and running, updates to its configuration continue to be a manual process. Opsani can help, with software that applies ML algorithms to provide continuous optimization for Kubernetes systems. Where this is challenging or impossible for a human, the Opsani AI handily finds and applies optimal configurations to the environment. Further, Opsani continually refines its understanding of the optimum across time and through load variations. Opsani has been proven to increase performance and save money for businesses that use Kubernetes.

Contact Opsani to learn more about how they can help you optimize your infrastructure and cut your costs with the power of AI and ML. You can also sign up for a free trial and experience how Opsani can take your business to greater heights.


Tuning by Humans vs Machine Learning Part 2

[VIDEO]

Tuning by Humans vs Machine Learning Part 2

When customers tune their applications manually, they typically see improvements of only 10-20%, and even that is the result of months of work. You either need someone who already has the skills to tune your application, or you’re looking at additional months while a person acquires those skills. Keep in mind that applications are constantly changing, making manually tuned results impossible to maintain. Watch the video below to learn more! Be sure to also check out part one, Tuning by Humans vs Tuning by Machine Learning.

Request A Demo

Introduction to Kubernetes: Starting with the Basics

[ARTICLE]

Introduction to K8s: The Basics

If you work with cloud anything, you will have at least heard the name Kubernetes or its abbreviation K8s. It is a system for automating deployment, scaling, and management of containerized applications that quickly rose to dominance after Google open-sourced it.  The original code grew out of similar tools used to run Google’s own systems.  Since its release, it has attracted a large following and has been taken under the wing of the Cloud Native Computing Foundation, which manages development and supports the community.

One of the key draws to adopting Kubernetes comes from the SRE mindset at Google to automate away “toil” or unnecessary manual tasks.  Kubernetes provides the ability to run containerized applications in production environments in a responsive manner.  Kubernetes can be configured to scale a system up or down in response to changes in load. It can smoothly roll out new applications and, if needed, roll back to a previous version.  It can gracefully handle the loss of application instances by automatically bringing new instances online. It automates service discovery and provides load balancing.

Kubernetes Concepts 

One challenge about Kubernetes is that there is a fair bit of terminology to learn to make sense of what the pieces are and what function those pieces provide.

Kubernetes Cluster 

A cluster encompasses the compute, network, and storage resources that Kubernetes, and the applications it manages, have access to. Cluster networks are flat to support east-west communication between Pods, and both IPv4 and IPv6 addresses are supported. Storage can be local, provided via a cloud service, or provided via a networked storage system (e.g. Gluster, Ceph, NFS). The actual cluster size will depend on the available compute resources. It is possible to spin up a Kubernetes cluster with one to a few nodes on a typical laptop. Production systems can run on bare-metal servers, on virtual machines, or even nested within containers.

Kubernetes Pods 

Pods are a container management concept that is truly unique to Kubernetes. A Pod is a logical group of containers that are run together; the containers grouped in a Pod share storage and a network namespace. While a Pod can run a single application container, the model supports microservice architectures, where a multi-container Pod represents an instance of a complete application.
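As an illustrative sketch (the image names are hypothetical), a two-container Pod pairing an application with a log-shipping sidecar looks like this:

    apiVersion: v1
    kind: Pod
    metadata:
      name: app-with-sidecar
    spec:
      containers:
      - name: app
        image: example.com/app:1.0        # hypothetical application image
      - name: log-shipper
        image: example.com/shipper:1.0    # hypothetical sidecar sharing the Pod's network and volumes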

Deployments

A production system will most likely run multiple replicas of a Pod, managed as a ReplicaSet, and a Kubernetes Deployment provides a declarative process for updating ReplicaSets. Initially, this could mean deploying a ReplicaSet that launches the application Pods for the first time. This is handy enough, but when your system is already running, updating your Deployment specification will cause the Deployment Controller to compare the desired state with the current state and then gracefully update the system to the new state. This could, incidentally, mean removing an existing Deployment. The Controller also monitors the system and ensures that the desired state is enforced: if a Pod goes offline, the Controller will spin up another to make sure the correct number of Pods remains in service.
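A minimal Deployment sketch (names and image are illustrative); editing replicas or the image and re-applying the file is exactly the kind of declarative update described above:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 3                  # desired number of Pod replicas
      selector:
        matchLabels:
          app: web                 # manage Pods carrying this label
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
          - name: web
            image: nginx:1.21      # illustrative image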

Labels

Labels are key/value pairs used to tag objects and create resource groupings. Individual objects, such as Pods, each have a unique ID. Labels allow the creation of meaningful tags that then simplify group interactions, such as using a label selector to identify a set of objects.
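For example, a label selector on the kubectl command line picks out all Pods carrying particular labels (the labels themselves are illustrative):

    kubectl get pods -l app=web,tier=frontend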

Services

A Kubernetes Service is an abstraction that defines a logical set of Pods, along with an access policy. While Pods have their own IP addresses, a Service provides a single DNS name for a set of Pods, and Kubernetes can load-balance across them. This setup allows the Service’s function to be targeted rather than specific application Pods, which are ephemeral resources. For example, a frontend application targeting, say, a backend database can call the service and be connected appropriately rather than needing a way to select a specific database Pod.
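A hedged sketch of a Service fronting database Pods (names and port are illustrative); a frontend simply connects to the DNS name db and is routed to a healthy Pod:

    apiVersion: v1
    kind: Service
    metadata:
      name: db                # reachable in-cluster by the DNS name "db"
    spec:
      selector:
        app: db               # targets Pods carrying this label
      ports:
      - port: 5432            # illustrative database port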

Kubelet

The Kubelet is an agent that runs on each Kubernetes-managed node and is responsible for managing the Pods on that node. It registers its node with the Kubernetes apiserver using either the hostname, a flag that overrides the hostname, or cloud-provider-specific logic, and it provides updates on node health. Running as a binary, it works from a combination of configuration files and PodSpecs, provided via the Kubernetes API as YAML or JSON, watching the API for scheduled creations and deletions of Pods and ensuring that the desired Pods are running and healthy.

As you can see, there is a fair bit of terminology to get familiar with before getting started working with Kubernetes. You can find greater details, and more terminology, at the Kubernetes homepage.  For more Kubernetes info, check out Kubernetes Best Practices & Cloud Cost Optimization.

At Opsani, we automate away the complexity of operating Kubernetes systems with machine learning algorithms that provide continuous optimization for Kubernetes. Where this is challenging or impossible for a human, the Opsani AI handily finds and applies optimal configurations to the environment. Further, Opsani continually refines its understanding of the optimum across time and through load variations.

Contact Opsani to learn more about how they can help you optimize your infrastructure and cut your costs with the power of AI and ML. You can also sign up for a free trial and experience how Opsani can take your business to greater heights.


When Did the Unholy Marriage of K8s and Java Begin?

[VIDEO]

Unholy Marriage of K8s and Java

When exactly did the unholy marriage of K8s and Java happen in the first place? The marriage started when Kubernetes became good enough to be used in enterprise environments. When Kubernetes started to grow in large enterprises, that’s when the need for Java in Kubernetes became natural. Learn more in our short video below! Make sure to also check out our last video, Tuning by Humans vs Machine Learning.

Request A Demo

Is SaaS Optimization Valuable for Your Enterprise?

[ARTICLE]

Is SaaS Optimization Valuable?

Software-as-a-Service (SaaS) has transformed the software industry. It is hard to argue that SaaS has not generally improved how software is used, accessed, managed, and paid for. The cloud model of only paying for what you use has made access to previously expensive software more manageable for a much broader population of users. At the same time, for the companies developing and maintaining software, this model has improved revenue by better limiting access to legitimate (i.e. paying) users while greatly increasing the user base. Check out SaaS vs Cloud - Is There Even a Difference? for a deeper understanding of the difference between SaaS and Cloud.

From an enterprise perspective, the downside of this change in the delivery model is SaaS sprawl. Your CIO is typically responsible for, and will try to control, SaaS spending. The double-edged sword of cloud computing is the requirement of on-demand, self-service access to the cloud resource provided (i.e. the application, in the case of SaaS). For your CIO, this ease of access leads to sprawl and a lack of visibility into how employees are interacting with SaaS. It can become challenging to track license costs, maintain insight into the renewal process, and manage security and compliance considerations. And as CIOs try to optimize spending based on what they believe is essential functionality, employees may lose access to the SaaS they view as critical to maximizing their efficiency.

So how should you optimize your SaaS? 

To optimize your SaaS correctly, you first need to understand that there are two different types of SaaS optimization. The first type focuses on enhancing application performance, the user experience, for the employees with access to the application. SaaS providers can monitor application performance to understand the relationship between their service level objectives (SLOs) and the service level agreements (SLAs) with their customers.

The second type of SaaS optimization also looks at performance from the employee’s perspective but adds the intent to optimize overall cost for the enterprise. This type of optimization aims to assure optimal results from each license. While it overlaps substantially with the first SaaS optimization model, the differences in how optimization is measured demand unique tools for each.

The Benefits of Applying Best Practices for SaaS Optimization 

While specifics will vary, optimization can be considered the delivery of highest productivity or value by an application along with the lowest possible spend to achieve that value.  Let’s look at several key target outcomes of implementing best practices to support application optimization:

  • Achieve Visibility: Implementing some form of metrics/monitoring service is a critical component of any SaaS optimization process. It can be challenging to solve issues within a SaaS application if there is no visibility into the specifics of its use and function, and lacking transparency of use can further result in security and compliance problems. While the particulars will vary by use case, keeping track of the number of licenses, specific user metrics such as frequency of use and feature use, and service level agreement (SLA) compliance is essential.
  • Eliminate Redundancy: Eliminating applications with overlapping functionality reduces friction between applications and streamlines functionality. While it is worth taking the time to evaluate alternatives before adoption, standardizing applications to eliminate redundancy enhances overall productivity by simplifying onboarding, management, and proficiency-building within the enterprise.
  • Plan for Expansion Costs: If you have application visibility, you can understand the actual value that a particular application provides to your enterprise. Knowing which apps are adding value and which are not allows you to appropriately scale application use, budget your costs, and predict future costs. 
  • Autonomous SaaS Ops: Many apps provide tiers with differing functionality that can provide value if scaled correctly.  The ability to automatically and granularly change the status of your license depending on different user needs will appropriately match needs with function (and cost).  Being able to define spending or license limits in conjunction with automated scaling ability will assure that app function meets your needs and does not unexpectedly result in cost overruns. 
  • Minimize Costs: It is common for enterprises to over-provision their licenses because the premier version must be the best solution, right?  Again, visibility is key to being able to understand actual use and appropriately scaling licenses up or down. Substantial cost savings can be realized by eliminating costly licenses for users that don’t demand deluxe features or even less costly licenses that are simply not being used.

The most effective way to reach SaaS optimization will require the necessary visibility (data) about actual application utilization, paired with autonomous operations.  As application options grow and it becomes more common to run interdependent SaaS apps, being able to automate the decision-making process is the most efficient way to discover and apply optimal functionality.

How to Find the Best SaaS Management Platform for You

While we’ve listed several recommended practices to get the best value out of your applications, not every SaaS management platform will be a good fit for your particular business. Here are some features to look for:

  • Overview: SaaS management platforms may provide some level of application visibility and integrations with, for example, enterprise financial systems. Understanding the depth and ease of integration with key business systems is key to having a clear overview of SaaS function and costs.
  • User Engagement data: Many SaaS platforms will provide a measure of user activity based on the user being logged in. This can artificially raise apparent use metrics and does not give you info about how users engage with the application. Ideally, a SaaS platform will provide data on actual activity and specific features being used.
  • Informed Renewal: A SaaS management platform should provide a clear understanding of your application licenses. This will allow you to take the next step in optimizing your application. Understanding how your enterprise is using the application over time, the specific suite of features that provide value, and the ability of the particular app license to meet your service level objectives (SLOs) is information that will best guide renewal decisions.
  • Autonomous operations: For a SaaS application, automation is the route to optimal performance and value. The ability to enable or delete applications automatically is the most efficient way to ensure the best fit of app availability and costs to business requirements. An ideal SaaS management platform will provide autonomous operations that can take use pattern metrics to match users with appropriate license types and the most cost-effective way to implement those licenses. 

A primary difficulty in achieving SaaS optimization is that many SaaS offerings do not come with an application management platform, or the management functionality only checks some of the essential features we’ve highlighted above. So what is the solution?

Continuous Optimization-as-a-Service

Opsani provides a secure SaaS offering that automates the process of optimizing performance and cost-effectiveness with artificial intelligence and machine learning, and that integrates with a wide range of application types.  If you are working with a cloud-enabled environment, whether you are running an IaaS, PaaS, or SaaS system, Opsani may be able to save you money and eliminate the manual labor involved in optimizing application performance. Opsani can:

  • Provide cost optimization
  • Tune your application to optimize performance
  • Improve application reliability 
  • Automatically adapt your application performance to handle changing load optimally
  • Keep costs within requirements
  • Integrate with popular metrics engines
  • Eliminate repetitive application optimization toil to allow your employees to focus on higher-value tasks
  • Provide users a better experience at a lower cost

Learn how to enable continuous optimization for your applications at Opsani.com.


SaaS vs Cloud - Is There Even a Difference?

[ARTICLE]

SaaS vs Cloud – Is There Even a Difference?

Software-as-a-Service (SaaS) is the cloud service model that most anyone that interacts with digital services has experience with.  This service model is now so ubiquitous that most people are not even aware that they are using a cloud computing service at all.  Today’s bevy of social media services are all SaaS applications. Despite this, most of those services’ users, if asked, will not be able to provide a clear distinction between SaaS and cloud computing and will quite possibly consider that they are the same thing.

Is There a Difference Between SaaS and Cloud Computing?

Part of the problem in understanding SaaS vs Cloud is that cloud computing has been both made to seem rocket-science-level complicated and dumbed down to the level of just being an on-demand storage service (a SaaS, by the way). The use of this storage-as-a-service SaaS as a default cloud example has proven to be an unfortunate choice, because raw storage (i.e. hard drives of some flavor) is also part of another cloud service model known as Infrastructure-as-a-Service, or IaaS. The recent trendiness of the -aaS designation has also resulted in a proliferation of as-a-Service designations for what are really just SaaS services. For a further explanation of SaaS, check out What is Software as a Service (SaaS) Anyway?

While definitions of cloud computing are legion, a widely accepted and official definition provided by the US National Institute of Standards and Technology (NIST) distills what makes a cloud into a system with five essential characteristics: measured service, on-demand self-service access, resource pooling, rapid elasticity, and broad network access. In an IaaS example, this would be self-service, on-demand access to server, network, and storage components. For SaaS, this would simply be access to an application. For completeness, the third service model is the (application development) Platform-as-a-Service, or PaaS. A PaaS (e.g. Heroku, OpenShift) provides an on-demand app development platform that allows developers to build and deploy applications without worrying about running the infrastructure. It fits between the IaaS and SaaS service models in terms of complexity and flexibility.

Do You Need a SaaS Product?

At this point, it is hopefully clear that SaaS is just one of three cloud computing service models along with PaaS and IaaS. So, all SaaS are cloud services, but not all clouds are SaaS. Well then, do you need a SaaS, PaaS or IaaS?  The accelerating trend towards “digital transformation,” aka moving to the cloud, is evidence of the overall benefit of cloud services for businesses.  The primary case where a cloud system might not be the best option is if your business or your customers are in areas where network connectivity is limited or inconsistent. Aside from that, the drawbacks to adopting a cloud system are negligible and the next question is, does a SaaS offering make the most sense for my business?

One way to think about this is recognizing that you have several cloud options for achieving your business goal, and the primary tradeoff is responsibility for, and flexibility with, the infrastructure used. A car analogy can be used to explain this concept. If your business goal is to travel from point A to point B on a frequent basis, where does it make sense to invest your time, effort, and money? The graphic below illustrates some of the tradeoffs you will need to make.

You can see that with leasing a vehicle (IaaS), the vendor manages the car (infrastructure) but you own most of the responsibility for operating the system along with getting to where you need to go. With a rental car (PaaS), your operational responsibility diminishes and you are still responsible for getting where you need to go. With a hired car (SaaS), the operational responsibility is effectively eliminated and your primary responsibility is defining the destination. If owning infrastructure does not provide value and you only care about the core functionality (just get me from A to B in our analogy), a SaaS is likely the best choice.

To come back to the world of software, if you suddenly had the need to store and share your massive photo collection with your many friends, the quickest, and likely most cost effective, way to do so would be with a SaaS.  Because SaaS applications are, by definition, on-demand and self-service, you can get access to several services that meet your needs very quickly.  While you could build your own cloud storage service with an IaaS or PaaS model, you then will need to accept that it will take time to build and you will be responsible for operational aspects of the service, like upgrades and bug fixes.  

As long as there is a SaaS out there that meets your needs, it will likely be the most cost effective and efficient solution. If you truly have a unique need that requires a custom solution, then it may be time to consider a PaaS or IaaS. Even then, it is worth considering that many SaaS providers are keen to expand the range of their service. In some cases, it may be possible to have your need added as a new feature by the SaaS provider.

Is SaaS the Optimal Cloud Solution?

Short of working in an environment where network connectivity is not robust, there are few cases where running an application in the cloud is a disadvantage. We’ve already talked a bit about why SaaS might be a better option versus IaaS or PaaS, but should it really be the go-to solution for everyone? The key business value proposition for all cloud services really boils down to paying only for what you need, and this is true for SaaS, PaaS, and IaaS. For most businesses, that is the question that really matters.

Even if you are running a PaaS or IaaS, it is quite possible that you will want to integrate a SaaS application that provides some particular function that is not core to what you are trying to develop. Why spend the time and money to build and operate a Single Sign-On (SSO) function when a SaaS that provides this can readily and securely be integrated into your system? Creating applications that leverage other apps’ functionality via their APIs is a great way to increase development velocity. Cloud services also bring with them lower cost (compared to building your own), eliminate the toil of updates, and scale functionality beyond what would otherwise be possible to purchase outright. The flexibility, cost effectiveness, and ready ability to scale functionality with low operational overhead make it quite understandable why SaaS is the largest and fastest growing cloud service model.

Opsani provides a continuous optimization SaaS that makes sure that your cloud system (IaaS, PaaS or SaaS) is providing the greatest value. Opsani takes the complexity of system limits and desired business outcomes, calculates the optimal outcome, and then returns the appropriate configuration changes. The end result of this automated process is reduced cost, improved application performance, and engineers that are freed up from toil to do interesting things for your business.  

Contact Opsani to learn more about how they can help you optimize your infrastructure and cut your costs with the power of AI and ML. You can also sign up for a free trial and experience how Opsani can take your business to greater heights.


Tuning by Humans vs Machine Learning

[VIDEO]

Tuning by Humans vs Machine Learning Part 1

Opsani examined applications that were tuned manually versus tuned with machine learning. True optimization often requires settings and parameters that don’t make sense on paper, and machine learning tries configurations that don’t make sense to humans. To learn more about how machine learning determines the best settings and parameters, check out the <2 minute clip below! Be sure to also check out our last video, Are There Any Easy Wins When Optimizing the Cloud?

Request A Demo