Docker allows you to run multiple applications on a single machine or across a cluster of machines. Most organizations run a mix of virtual machines and containers, and their logging and monitoring stacks are typically configured to support virtual machines.
Most teams struggle to make Docker logging behave the way virtual machine logging does, so they send logs to the host filesystem and let the log analytics solution consume the data from there. This is not ideal, and you should avoid making this mistake. It might work while your containers are static, but it breaks down once you have a cluster of servers, each running Docker, where a container can be scheduled on any virtual machine: the logs for a single application end up scattered across hosts.
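To make the anti-pattern concrete, here is a minimal sketch of what the host-filesystem approach typically looks like; the image name and paths are hypothetical placeholders:

# Anti-pattern: bind-mount a host directory and have the application
# write its log files there (image name and host path are placeholders).
docker run -d --name myapp \
  -v /var/log/myapp:/var/log/myapp \
  myorg/myapp:latest
# A host-based agent then tails /var/log/myapp, which breaks as soon
# as the scheduler places the container on a different node.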
So, treating a container as an application running on a particular virtual machine is a mistake from a logging point of view. Instead, you should treat the container as an entity in its own right, just as you would a virtual machine, and never tie your logging to the virtual machine a container happens to be running on.
One solution is to use a logging driver to forward the logs directly to a log analytics solution. But then, the logging becomes heavily dependent on the availability of that solution, so it might not be the best thing to do: teams have seen their services running on Docker go down because the log analytics solution was unavailable or because of network issues.
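For example, Docker's built-in gelf logging driver can ship logs straight to a GELF-compatible endpoint such as Logstash or Graylog; the endpoint address and image name below are placeholders:

# Forward logs directly to a remote GELF endpoint. The container's
# log output now depends on this endpoint being reachable.
docker run -d --name myapp \
  --log-driver gelf \
  --log-opt gelf-address=udp://logs.example.com:12201 \
  --log-opt tag="{{.Name}}" \
  myorg/myapp:latest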
Well, the best way to approach this problem is to store the logs temporarily on the virtual machine as JSON files, using Docker's default json-file logging driver, and to use another container to push the logs to your chosen log analytics solution the old-fashioned way. That way, you decouple your application from its dependency on an external service.
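A minimal sketch of this approach: json-file is Docker's default logging driver, and its max-size and max-file options keep the on-disk logs bounded (the image name is a placeholder):

# Keep logs on the local disk as rotated JSON files.
docker run -d --name myapp \
  --log-driver json-file \
  --log-opt max-size=10m \
  --log-opt max-file=5 \
  myorg/myapp:latest

With this driver, docker logs continues to work, and the files land under /var/lib/docker/containers/<container-id>/ on the host, where a forwarder can pick them up.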
Within the log forwarder container, you can use a logging agent to export the logs to your log analytics solution; many such agents and logging drivers are available, supporting a wide range of log targets. Always tag the logs in such a way that each container appears as its own entity. This disassociates containers from virtual machines, and you can then make the best use of a distributed container-based architecture.
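As a sketch, a Fluentd container could mount Docker's log directory read-only and ship the JSON files onward. The fluent.conf mounted below is assumed to exist and is not shown; it would tail /var/lib/docker/containers/*/*.log, attach each container's identity as a tag, and define the output target:

# Run a node-local log forwarder that reads the JSON log files and
# ships them to the log analytics solution.
docker run -d --name log-forwarder \
  -v /var/lib/docker/containers:/var/lib/docker/containers:ro \
  -v "$(pwd)/fluent.conf:/fluentd/etc/fluent.conf:ro" \
  fluent/fluentd:latest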
So far, we’ve looked at the logging aspects of containers, but one of the essential elements of a DevOps engineer’s role is monitoring. We’ll have a look at this in the next section.
Docker monitoring with Prometheus
Monitoring Docker nodes and containers is an essential part of managing Docker. Various tools are available for this, and while you can use traditional ones such as Nagios, Prometheus is gaining ground in cloud-native monitoring because of its simplicity and pluggable architecture.
Prometheus is a free, open source monitoring tool that provides a multi-dimensional data model, efficient and straightforward querying using the Prometheus query language (PromQL), an efficient time series database, and modern alerting capabilities.
It has several exporters available for collecting data from various sources and supports both virtual machines and containers. Before we delve into the details, let’s look at some of the challenges with container monitoring.
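As a quick preview before we get to those details, here is a minimal sketch of Docker monitoring with Prometheus, using cAdvisor as the container metrics exporter; the network name and file paths are illustrative:

# Put the monitoring pieces on a shared Docker network so Prometheus
# can reach cAdvisor by container name.
docker network create monitoring

# cAdvisor exposes per-container metrics from the Docker host.
docker run -d --name cadvisor --network monitoring \
  -v /:/rootfs:ro \
  -v /var/run:/var/run:ro \
  -v /sys:/sys:ro \
  -v /var/lib/docker/:/var/lib/docker:ro \
  gcr.io/cadvisor/cadvisor

# Minimal Prometheus configuration that scrapes cAdvisor.
cat > prometheus.yml <<'EOF'
scrape_configs:
  - job_name: cadvisor
    static_configs:
      - targets: ['cadvisor:8080']
EOF

# Run Prometheus with that configuration.
docker run -d --name prometheus --network monitoring -p 9090:9090 \
  -v "$(pwd)/prometheus.yml:/etc/prometheus/prometheus.yml:ro" \
  prom/prometheus

Once both containers are up, a PromQL query such as rate(container_cpu_usage_seconds_total[5m]) in the Prometheus UI at http://localhost:9090 shows per-container CPU usage.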