There’s no doubt that containers help speed up the delivery of your application. Companies have begun adopting them in the hope of rapidly delivering more features and value to the customers of their product or service. Part of that adoption is the acceptance of new processes, often called the DevOps pipeline, which should cover most of the checks and balances (CI/CD, testing, etc.) that exist in legacy systems. Without them, it becomes difficult to maintain the quality of your application along with the speed of delivery.

One validation that should exist applies whenever you deploy or update an application. It is useful for a deployment system to examine, query, and assess the application, its services, and ultimately the container instances that comprise the application, to determine whether…

  • it is running (according to the container runtime engine)
  • it is alive (responsive)
  • it exposes the expected API or service
  • it is healthy (passes a health check)
  • it performs adequately (passes a performance check)
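The checks above form a ladder, from the cheapest probe to the most meaningful one. The sketch below, a minimal illustration and not any particular tool's implementation, walks that ladder against a stand-in service: a TCP connect for "alive," a real request against the service endpoint for "exposes the expected API" and "healthy," and a latency budget for "performs adequately." (The "running" check is omitted here because it requires querying the container runtime engine itself, e.g. `docker inspect`.) The stub server and the `/health` endpoint are assumptions for the sake of a self-contained example.

```python
import json
import socket
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical healthy service standing in for the container under test.
class StubService(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"status": "ok"}).encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the sketch quiet
        pass

server = HTTPServer(("127.0.0.1", 0), StubService)
threading.Thread(target=server.serve_forever, daemon=True).start()
host, port = "127.0.0.1", server.server_port

def is_alive():
    """Alive: does the service port accept a TCP connection at all?"""
    try:
        socket.create_connection((host, port), timeout=2).close()
        return True
    except OSError:
        return False

def is_healthy():
    """Exposes the expected API and is healthy: does the real endpoint
    answer with the payload a client actually depends on?"""
    with urllib.request.urlopen(f"http://{host}:{port}/health", timeout=2) as r:
        return r.status == 200 and json.loads(r.read()).get("status") == "ok"

def performs(budget_s=1.0):
    """Performs adequately: does it answer within a latency budget?"""
    start = time.monotonic()
    ok = is_healthy()
    return ok and (time.monotonic() - start) < budget_s

checks = [is_alive(), is_healthy(), performs()]
server.shutdown()
print(checks)
```

Each rung subsumes the one below it: a service can pass the TCP check while failing the functional one, which is exactly why stopping at the bottom of the ladder is risky.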

Limited Probes = Deployment Failure, Downtime, Wrong Pager

Existing systems rely on mechanisms like a container’s built-in health check or a management-interface ping to execute a quick probe from inside or outside the container and see if it is truly ready to go. These internal and external health checks are a staple in monitoring containers as they are being deployed. However, because they don’t check the service the way a dependent client would, they can send you on a wild goose chase when a real problem does occur.

Docker supports an internal health check, which runs a command inside the container but may or may not perform an HTTP or TCP check. Kubernetes and Mesos support basic external HTTP and TCP checks, but they use a management interface rather than the service port.
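For reference, Docker’s built-in check looks like the following sketch. The check runs an arbitrary command inside the container on a schedule; whether that command actually exercises the service, or merely succeeds as a command, is entirely up to the author. The interval values and the endpoint are illustrative assumptions, not a recommendation.

```dockerfile
# Runs inside the container every 30s; the container is marked unhealthy
# after 3 consecutive failures. Note this only proves what the CMD proves:
# here, that something on localhost:8080 answers curl, nothing more.
HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
  CMD curl -f http://localhost:8080/ || exit 1
```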

These checks, though sufficient for simple situations, can produce false negatives or false positives, leaving teams struggling to find the actual problems in their applications. Time spent troubleshooting the wrong component is wasted, and your team can’t afford downtime.
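A false positive of this kind is easy to reproduce. In the hypothetical stub below, the service’s shallow `/ping` endpoint still answers 200 (so a basic liveness check passes) even though the endpoint real clients call returns 500, as it would if, say, the service had lost its database connection. All endpoint names and the failure scenario are assumptions for illustration.

```python
import threading
import urllib.error
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class DegradedService(BaseHTTPRequestHandler):
    """Hypothetical service whose management ping works but whose
    real API is broken (e.g. it lost its database connection)."""
    def do_GET(self):
        if self.path == "/ping":       # shallow liveness endpoint
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"pong")
        else:                          # the endpoint clients actually call
            self.send_response(500)
            self.end_headers()
            self.wfile.write(b"database unavailable")

    def log_message(self, *args):  # keep the sketch quiet
        pass

def status_of(url):
    """Return the HTTP status code for a GET, including error codes."""
    try:
        with urllib.request.urlopen(url, timeout=3) as resp:
            return resp.status
    except urllib.error.HTTPError as e:
        return e.code

server = HTTPServer(("127.0.0.1", 0), DegradedService)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_port}"

ping_ok = status_of(f"{base}/ping") == 200    # the shallow check passes...
api_ok = status_of(f"{base}/orders") == 200   # ...but a real client fails
server.shutdown()
print(ping_ok, api_ok)
```

A deployment system watching only the ping would declare this service healthy and page the wrong team when customers start seeing errors.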

Valid Test of Functionality: Containerized Application Probes

The only true test of a system is to act as a client of that service – to exercise the functionality of the individual service and everything connected to it, just as a true API call would. Containerized Application Probes, or CAP for short, can do just that; CAP faithfully reproduces the position of the service’s client, ensuring a true test of the container’s service interface.

Because CAP runs in its own container, it can be packaged along with a specific service client library, run-time configuration, and even security controls for the service. It is also configured to run on the target service’s network and connects to the exposed service interface. This allows it to check the target service’s operation exactly as it would be seen by the service’s client, and does not require changing the target service to add checks or to listen on additional management ports.
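In Docker terms, attaching a probe container to the target service’s network might look like the following sketch. The image, network, and URL names are all hypothetical; the point is that the probe reaches the same exposed service port a real client would, with nothing added to the target container.

```shell
# Illustrative only: join the target service's network and probe the
# same service interface a real client uses. No management port is
# opened on the target, and its image is unchanged.
docker run --rm --network my-app-net \
  -e TARGET_URL=http://web:8080/api \
  example/cap-probe:latest
```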

To re-cap (pun intended), there is no other way to fully validate a container.

CAP provides:

  1. True functional tests of the service
  2. Client’s point-of-view probing and validation of your application and services
  3. Service interface probing without modifying the service’s code or configuration
  4. Support for Docker containers today, with Kubernetes coming soon

CAP provides more trusted validation of the deployment and better precision in identifying the failing service when there is one. In this video, Opsani software engineer Stephen Quintero shows an example of a false positive when using Docker’s HEALTHCHECK to test a WordPress and MySQL deployment.

If you’re going to adopt containers to speed up the delivery of features, you need to consider more than the standard health checks during deployment to make sure your newfound speed doesn’t reduce the quality of your service.