Writing software is hard. You have to balance the demands of shipping against assuring quality, all while keeping code readable and maintainable.
It’s not easy.
Beyond development, maintaining a running application is harder still. Serving users requires responding to problems in a timely fashion, and to respond to problems, you need to know they exist. In short: writing software is hard; running software is often harder.
And that’s why we monitor.
When your systems exhibit behavior requiring remediation, you need to know about it. Monitoring is the lifeblood of any organization that cares about serving users consistently. In times past, monitoring could be handed off to an operations group—with manual effort and rudimentary tools—who watched logs on a single server or a small number of them. We don’t live in that world anymore, though.
Fast forward to today. In modern organizations, cloud-native deployments, serverless platforms, and container orchestration engines have driven an explosion in the number and variety of system components that can create problems. These platforms enable great experiences for users and technical professionals, but they make it harder than ever to monitor effectively.
When using Kubernetes, monitoring requires forethought, effort, and great tools to make sure you have visibility into what is happening in your systems. Alerting and self-healing when problems surface are paramount to your success.
In the sections that follow, we’ll explore the options available for monitoring containerized workloads deployed via Kubernetes and the decisions you’ll have to make.
The first step in understanding what’s happening in your systems is to expose metrics and logs. Visibility is necessary but not sufficient for monitoring: it’s only a start, but without it, monitoring isn’t possible.
The starting point for Kubernetes visibility is the command-line tool you already use to interact with your clusters: kubectl. It’s the convenient tool you use from your workstation to see and manage the status of your containers, pods, deployments, services, and every other resource in Kubernetes. It’s your first and best friend.
kubectl lets you issue commands to the Kubernetes API server on your cluster. A kubectl command looks like this:
kubectl [command] [TYPE] [NAME] [flags]
An example of using kubectl for visibility is listing the deployments in your cluster.
kubectl get deployments
This command will show you deployments, how many pods are expected for each deployment, how many are ready and available, and how long the deployment has been active.
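For instance, the output might look something like this (the deployment names and counts here are hypothetical, but the columns are the standard ones kubectl prints):

```shell
kubectl get deployments

# Illustrative output:
# NAME       READY   UP-TO-DATE   AVAILABLE   AGE
# frontend   3/3     3            3           12d
# api        2/2     2            2           5d
```

READY shows how many pods are ready out of how many the deployment expects, and AGE shows how long the deployment has existed.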
The commands you can issue with kubectl are rich and numerous, and this is only the beginning of the ways you can query your cluster to see what is happening. Refer to the Kubernetes documentation for more information on how to use kubectl.
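As a taste of what else is available, here are a few standard read-only kubectl commands that are useful for a quick look at cluster state:

```shell
# List pods in every namespace, including which node each runs on
kubectl get pods --all-namespaces -o wide

# Show detailed state and recent events for a single pod
kubectl describe pod <pod-name>

# Show recent cluster events, oldest first
kubectl get events --sort-by=.metadata.creationTimestamp

# Show CPU and memory usage per node (requires metrics-server)
kubectl top nodes
```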
Another way to use kubectl to go deeper into what is happening inside a given container in a given pod is the logs command. It takes the form:
kubectl logs <pod-name> -c <container-name>
The -c flag selects a container and is only needed when the pod runs more than one.
Like the tail command in Linux/Unix, you can use -f (follow) with kubectl logs to stream log output. This is a familiar way for developers and operations professionals to watch events in progress. It’s useful for troubleshooting, but it’s often too narrow, focused on single pods and containers, to serve as a general monitoring strategy.
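A few variations worth knowing (these are standard kubectl logs flags; the pod and container names are placeholders):

```shell
# Stream logs as they are written, like tail -f
kubectl logs -f <pod-name> -c <container-name>

# Show only the last 100 lines
kubectl logs --tail=100 <pod-name>

# Show logs from the previous, crashed instance of a container
kubectl logs --previous <pod-name>

# Show logs from the last hour
kubectl logs --since=1h <pod-name>
```

The --previous flag in particular is handy when a container is crash-looping and you need to see why the last instance died.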
It’s nice that you can use kubectl to check the status of your Kubernetes resources, but there are other options for visibility too. Dashboard is a web user interface for your Kubernetes cluster. It isn’t deployed automatically when you set up a cluster, but it’s easy to add. Much of what you can do with kubectl, you can also do in Dashboard. It doesn’t replace the command-line interface, especially for automated deployment and management, but it gives you another option with nice visualization and accessibility.
Generally, Dashboard’s abilities to mutate the state of your Kubernetes resources are best left unused. You want immutable services and deployments created, updated, and managed by automated deployment pipelines. Just because Dashboard lets you change things doesn’t mean you should.
Still, Dashboard is useful. When you want to see a visual representation of the state of your cluster, nodes, storage, and workloads, it’s a great resource with broad views of the cluster as a whole and the ability to drill into greater details on particular resources.
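As a sketch of how to get started, Dashboard can be installed and reached with kubectl itself. The manifest URL below points at a specific released version of the project, which may not be the latest by the time you read this; check the Dashboard repository for the current install instructions:

```shell
# Deploy Dashboard from the project's published manifest
# (version shown here may be outdated; see github.com/kubernetes/dashboard)
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml

# Start a local proxy to the cluster API server
kubectl proxy

# Dashboard is then available in a browser at:
# http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
```

You’ll also need to authenticate, typically with a bearer token from a service account; the Dashboard documentation covers creating one.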
Among the reasons to choose Kubernetes as the platform on which to run your containerized workloads is the way it takes the system state you desire and makes it a reality. You use manifests to describe how you want your system to run and Kubernetes takes care of bringing that state into being.
To know that your applications are healthy and running in the containers Kubernetes creates, though, you need to offer some assistance. Kubernetes monitors the health of containers automatically, but it relies on you to tell it what healthy means. When you specify the containers you want running in your pods, you can define probes on them.
Probes are tests of your containers. They come in three types: liveness, readiness, and startup. Startup probes matter mainly for slow-starting containers; in most cases, it’s liveness and readiness probes you need to know about. A probe can run a command inside your container or make an HTTP request (or another network connection) to verify that the container is running and ready to accept work.
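Here’s a minimal sketch of liveness and readiness probes on a container spec. The probe fields are standard Kubernetes ones, but the image name, endpoints, and port are hypothetical placeholders:

```shell
# Apply a pod whose container declares liveness and readiness probes
# (image, paths, and port below are hypothetical examples)
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: example-app
spec:
  containers:
  - name: app
    image: example/app:1.0
    ports:
    - containerPort: 8080
    livenessProbe:            # restart the container if this fails
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 15
    readinessProbe:           # stop routing traffic if this fails
      httpGet:
        path: /ready
        port: 8080
      periodSeconds: 5
EOF
```

The distinction matters: a failed liveness probe gets the container restarted, while a failed readiness probe simply removes the pod from service endpoints until it recovers.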
This is the base of monitoring—the monitoring Kubernetes does for you to make sure you have healthy containers working on your behalf.
Stackify Retrace comes to your aid in monitoring your workloads. It’ll work in most environments. Kubernetes is no exception. By following the instructions to install Retrace into your cluster, you’ll get all the visibility you need into what is going on inside your Kubernetes-deployed systems. Further, you’ll have one place to go to look at metrics, logs, and application performance monitoring indicators across the whole cluster.
There is a wide variety of tools you can use to monitor your systems. Retrace stands out in that it gives you a convenient dashboard, machine characteristics for your nodes and containers, structured logging, and performance tracing. Given that it thrives on bringing your monitoring information from multiple sources into one dashboard, it fits Kubernetes like a glove.
Most organizations will run Kubernetes via one of the large public cloud computing providers. Each of these giants provides a wealth of tools for monitoring the nodes and cluster itself, as well as application performance and logging. You’ll want to get familiar with the options available from your cloud provider in addition to using Retrace with your cluster.
You must know what is happening in the systems you deploy. When problems arise, they need to be fixed, ideally before users notice there’s a problem at all.
Monitoring is at the heart of operating software systems with confidence, and it’s as important when using Kubernetes as on any other platform. That said, Kubernetes clusters present some unique challenges due to the dynamic nature of resources that come and go.
Remember that there are useful tools that bridge the gap and make monitoring joyful when using Kubernetes. Make sure you are aware of, and make use of, the tools covered here: kubectl, the Kubernetes Dashboard, container probes, your cloud provider’s monitoring services, and Stackify Retrace.
Monitoring is more than tools, though. It’s a mindset as well. Good monitoring requires that you care about what happens for your users and that they get the best experience possible. Armed with your desire to build great systems and great tools like Kubernetes, you’re ready to go and create value.
If you would like to be a guest contributor to the Stackify blog, please reach out to stackify@stackify.com.