Kubernetes Best Practices

Everyone loves Kubernetes. The leading container orchestration system automates nearly every aspect of application deployment and management. DevOps teams would struggle without it! However, Kubernetes is a rich and complex piece of software, and Kubernetes best practices are a must for any team that wants to get the most out of the platform.

Here are seven key K8s optimization tips that DevOps teams should add to their Kubernetes best practices list.    

  1. Use Autoscaling

This is a simple concept, but it surprisingly goes unused in many deployments. The K8s Horizontal Pod Autoscaler can scale workloads on CPU utilization as well as on custom metrics, which gives you the headroom you need to meet your SLOs.
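As a minimal sketch, a Horizontal Pod Autoscaler that targets 70% average CPU utilization might look like the manifest below. The Deployment name `web` and the replica bounds are placeholders, and the `autoscaling/v2` API assumes a reasonably recent cluster:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web               # hypothetical Deployment to scale
  minReplicas: 2            # floor that preserves headroom for sudden load
  maxReplicas: 10           # ceiling that caps cost
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out when average CPU passes 70% of requests
```

Note that the utilization target is measured against the pods' CPU requests, which is one more reason to keep those requests realistic (see tip 3).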

  2. Optimize Pods with the Right Resources

Not all hardware is the same, so you need to choose the correct machine type for the workload. Machines come with a wide variety of properties: some have more RAM, some are equipped with GPUs. Select with care, because matching the type of work to the type of compute unit is what lets you use those compute units fully.
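One common way to match work to hardware is a node selector (or node affinity) that pins a workload to a node pool built for it. The sketch below assumes a GPU node pool labeled `node-pool: gpu` and the NVIDIA device plugin installed on those nodes; the labels and image are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: trainer
spec:
  replicas: 1
  selector:
    matchLabels:
      app: trainer
  template:
    metadata:
      labels:
        app: trainer
    spec:
      nodeSelector:
        node-pool: gpu                                  # hypothetical label on GPU-equipped nodes
      containers:
        - name: trainer
          image: registry.example.com/trainer:latest    # placeholder image
          resources:
            limits:
              nvidia.com/gpu: 1                         # requires the NVIDIA device plugin
```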

  3. Down-Scale Container Limits to Match Resources

When a container requests higher limits than it actually uses, it introduces slack into the system. That slack translates directly into cost, and a little bit here and a little bit there adds up. Reduce individual container limits to match the actual utilization of those resources.
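For example, a container whose measured steady state is around 200m of CPU and 220Mi of memory has no reason to carry a 2-CPU / 2Gi limit. A right-sized spec, with placeholder numbers and image, might look like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: api
spec:
  containers:
    - name: api
      image: registry.example.com/api:1.4   # placeholder image
      resources:
        requests:
          cpu: 250m        # measured steady state was ~200m
          memory: 256Mi    # measured steady state was ~220Mi
        limits:
          cpu: 500m        # down from 2000m; still leaves room for short bursts
          memory: 512Mi    # down from 2Gi
```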

  4. Use Time-based Scaling

Some resources are used less outside of business hours. Set a baseline that matches the natural flow of your business, and schedule scaling based on anticipated usage.
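One simple way to do this, sketched below, is a CronJob that scales a Deployment down in the evening; a mirror-image job would scale it back up in the morning. The `scaler` ServiceAccount is assumed to have RBAC permission to scale Deployments, and `deployment/web` is a placeholder:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: scale-down-evenings
spec:
  schedule: "0 19 * * 1-5"            # 19:00 on weekdays, in the cluster's time zone
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: scaler  # assumed to have RBAC rights to scale Deployments
          restartPolicy: OnFailure
          containers:
            - name: kubectl
              image: bitnami/kubectl:latest
              command: ["kubectl", "scale", "deployment/web", "--replicas=2"]
```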

  5. Measure

Tips 1 through 4 all depend on measuring what is actually going on in your system. Use open source tools such as Prometheus and Grafana to get that visibility.
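As one hedged example, a Prometheus rule file can record per-namespace CPU usage next to per-namespace CPU requests, so a Grafana panel can chart the ratio and expose slack. The metric names come from cAdvisor and kube-state-metrics, which are assumed to be scraped:

```yaml
groups:
  - name: utilization
    rules:
      # Actual CPU consumed, per namespace (cAdvisor metric)
      - record: namespace:container_cpu_usage:rate5m
        expr: sum(rate(container_cpu_usage_seconds_total{container!=""}[5m])) by (namespace)
      # CPU requested, per namespace (kube-state-metrics metric)
      - record: namespace:container_cpu_requests:sum
        expr: sum(kube_pod_container_resource_requests{resource="cpu"}) by (namespace)
```

Charting usage divided by requests shows how much of what you ask for you actually use.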

  6. Tune the Application

Kubernetes keeps failing applications up by restarting them, which can mask code problems. Make sure the application is not actually crashing, and keep in mind that changes to the application or its runtime environment may let it run on fewer resources.
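Restart counts are the usual giveaway. Building on the Prometheus setup from tip 5, a sketch of an alert on the kube-state-metrics restart counter could look like this (the threshold is an arbitrary placeholder):

```yaml
groups:
  - name: crash-detection
    rules:
      - alert: ContainerRestartingFrequently
        expr: increase(kube_pod_container_status_restarts_total[1h]) > 3   # arbitrary threshold
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "{{ $labels.namespace }}/{{ $labels.pod }} keeps restarting; a crash loop may be hiding behind Kubernetes' self-healing"
```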

  7. Negotiate

When you settle on a configuration that you will run for a while, negotiate with your cloud service provider. Keep the contract short enough that, when the nature of your application changes, you can still move to a different compute unit type or size.

To learn about onboarding Opsani to Kubernetes, head over here.