Unpacking Containers: CloudWatch’s Role in Docker Monitoring

Introduction

Docker has revolutionized the way we run applications by enabling lightweight, portable, and scalable environments. It separates applications from infrastructure, making quick delivery of software possible.

What is Docker?

Docker provides the ability to package and run an application in a loosely isolated environment called a container. These containers package your application's code, libraries, and dependencies into a single, portable unit.

Why Monitor Docker Logs?

Running containers in production isn’t just about deploying them; monitoring logs is critical for:

🛠️ Debugging issues.

📊 Gaining real-time visibility into application behavior.

🔒 Enhancing security.

In this blog, we'll explore how to monitor and manage Docker logs using CloudWatch, with a focus on the CloudWatch Agent and Docker's 'awslogs' log driver.

Monitoring Logs with CloudWatch

Applications aren’t "set and forget." Although Docker streamlines deployment, monitoring logs is essential. Without effective monitoring, you could encounter issues such as undetected performance bottlenecks and hard-to-trace bugs ⚠️

Why CloudWatch?

Container logs are ephemeral: they don't persist if the container crashes or is removed. This is where CloudWatch plays a crucial role.

CloudWatch provides centralized log management, aggregating logs from multiple containers in a single place. With logs in CloudWatch, you can set up alerts for specific error messages, and filter, search, and analyze log data. CloudWatch also integrates seamlessly with other AWS services such as AWS Lambda, SNS, and SES.
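
As a quick sketch of that search-and-filter workflow, the snippet below queries the last hour of aggregated logs for the word "ERROR". It assumes the AWS CLI is installed and credentialed, and reuses the 'DockerLogsEC2' log group name used later in this post:

```shell
# Search the last hour of a CloudWatch log group for the word "ERROR".
# Assumes the AWS CLI is installed and the instance has credentials;
# the log group name is the one used later in this post.
LOG_GROUP="DockerLogsEC2"
START_MS=$(( ( $(date +%s) - 3600 ) * 1000 ))   # CloudWatch expects epoch milliseconds

if command -v aws >/dev/null 2>&1; then
  aws logs filter-log-events \
    --log-group-name "$LOG_GROUP" \
    --filter-pattern "ERROR" \
    --start-time "$START_MS"
else
  echo "aws CLI not found; would search $LOG_GROUP from $START_MS onwards"
fi
```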

Prerequisites

  • Access to an EC2 Instance: Ensure an active EC2 instance is running.

  • Docker Installed: Docker should be installed and properly configured on the EC2 instance.

  • IAM Role: The instance should have an IAM role attached with the necessary permissions to write logs to CloudWatch.
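
For the IAM role, AWS's managed CloudWatchAgentServerPolicy is usually sufficient. As a sketch, a minimal inline policy covering just the log-writing permissions might look like this (scope the Resource down for production use):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents",
        "logs:DescribeLogStreams"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    }
  ]
}
```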

Monitoring Logs

Using Amazon CloudWatch Agent

The CloudWatch Agent monitors system-level metrics and logs from an EC2 instance, including log files written by Docker containers.

Setting up the CloudWatch Agent

  1. On your EC2 instance, install the CloudWatch agent:

     sudo yum install amazon-cloudwatch-agent -y
    
  2. Configure the agent using the wizard:

     sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-config-wizard
    

    Or simply create a ‘config.json’ file with the following configuration:

     sudo nano /opt/aws/amazon-cloudwatch-agent/bin/config.json  # Opens the file to enter the configuration
    
     {
       "logs": {
         "logs_collected": {
           "files": {
             "collect_list": [
               {
                 "file_path": "/var/lib/docker/containers/*/*.log", 
                 "log_group_name": "DockerLogsEC2",                
                 "log_stream_name": "{instance_id}",
                 "timezone": "Local"
               }
             ]
           }
         }
       }
     }
    
  3. Now, apply the configuration to the CloudWatch agent and start it by running the following command:

     sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl \
       -a fetch-config \
       -m ec2 \
       -c file:/opt/aws/amazon-cloudwatch-agent/bin/config.json \
       -s
    
  4. The CloudWatch agent is now running.

  5. Now we can test it by running a container and checking for its logs in CloudWatch:

     docker run hello-world   # Simple hello-world container
    
  6. The container runs successfully, and its logs can be seen in CloudWatch.
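
One gotcha worth guarding against: a typo in 'config.json' will stop the agent from picking up the file. A quick local sanity check is to validate the JSON before applying it. The sketch below writes the configuration from step 2 to a temporary file so it is self-contained; on the real instance you would point python3 at the actual config path instead:

```shell
# Sanity-check the agent configuration JSON before handing it to the agent.
# The config from step 2 is written to a temp file here; on the real instance,
# point python3 at /opt/aws/amazon-cloudwatch-agent/bin/config.json instead.
CONFIG=$(mktemp)
cat > "$CONFIG" <<'EOF'
{
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/var/lib/docker/containers/*/*.log",
            "log_group_name": "DockerLogsEC2",
            "log_stream_name": "{instance_id}",
            "timezone": "Local"
          }
        ]
      }
    }
  }
}
EOF
python3 -m json.tool "$CONFIG" > /dev/null && echo "config.json is valid JSON"
```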

This approach has certain drawbacks: logs from all containers are written to the same log stream, making it challenging to differentiate logs from individual containers. Additionally, the CloudWatch Agent runs as a separate process, consuming resources and requiring high availability to ensure logs are forwarded without interruption. Any failure or misconfiguration in the agent can result in missing or delayed log data.

Using ‘awslogs’ Driver

The ‘awslogs’ driver is a native Docker logging driver that eliminates the need for an additional process like the CloudWatch Agent. It creates a unique log stream for each container, making it easier to track and analyze logs.

Setting up ‘awslogs’ driver

  1. First, create a log group in CloudWatch and name it ‘DockerLogsEC2’.

  2. Now, configure Docker logging by adding the following configuration to daemon.json:

     sudo nano /etc/docker/daemon.json
    
     {
       "log-driver": "awslogs",
       "log-opts": {
         "awslogs-region": "eu-north-1",
         "awslogs-group": "DockerLogsEC2",
         "tag": "{{.Name}}-{{.ID}}"
       }
     }
    
  3. Next, restart Docker:

     sudo systemctl restart docker
    
  4. Now, run a Docker container and check whether its logs are available in CloudWatch:

     docker run hello-world  #Run a hello-world container
    

  5. As seen above, a log stream has been created with the container’s name and ID. We can also check whether a new stream is created for another container:

     docker run alpine echo "Hello Aarush"   #Runs alpine container and prints the message
    

    A new log stream will be created for every container!
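
The daemon.json change above makes ‘awslogs’ the default for every container. If only some containers should ship logs to CloudWatch, the same options can be passed per container instead. A sketch, reusing this post's region and log group (the Docker daemon still needs AWS credentials for the driver to work):

```shell
# Per-container override: route only this container's logs to CloudWatch
# without changing the daemon-wide default in daemon.json.
# Region and log group reuse the values from step 2; the Docker daemon
# still needs AWS credentials for the awslogs driver to work.
HAVE_DOCKER=$(command -v docker >/dev/null 2>&1 && echo yes || echo no)

if [ "$HAVE_DOCKER" = "yes" ]; then
  docker run --rm \
    --log-driver=awslogs \
    --log-opt awslogs-region=eu-north-1 \
    --log-opt awslogs-group=DockerLogsEC2 \
    alpine echo "Hello from a per-container awslogs override"
else
  echo "docker not found; the flags above show the per-container override"
fi
```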

Conclusion

Sending Docker logs to Amazon CloudWatch is an effective way to centralize log management, improve system monitoring, and simplify troubleshooting. Additionally, CloudWatch offers CloudWatch Logs Insights, a powerful tool for querying logs and gaining deeper insights into your data. Both the Amazon CloudWatch Agent and the ‘awslogs’ driver serve distinct roles.
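
As an illustration, a Logs Insights query against the ‘DockerLogsEC2’ group from this post could surface recent errors across all container streams (query syntax only, run from the Logs Insights console):

```
fields @timestamp, @logStream, @message
| filter @message like /error/
| sort @timestamp desc
| limit 20
```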

The CloudWatch Agent is best suited for environments where logs must be collected from various sources, such as system-level metrics and application log files.

On the other hand, the ‘awslogs’ driver offers a simpler, more streamlined setup for basic logging needs. It’s suitable for use cases where you want to push container logs to CloudWatch without additional customization.

Ultimately, integrating CloudWatch into your Docker workflows can significantly streamline operations, improve reliability, and help maintain better control over your infrastructure.

Happy Learning

Aarush Luthra

Connect with me on LinkedIn : Aarush Luthra
