Containers have revolutionized modern software development. They are a key component of microservices - a software development technique. In this tutorial, I'm going to briefly demonstrate how to build, manage, and operate containers in the AWS cloud.

[Image: ecs1]

Before we build and start the first containers, let's discuss the prerequisites. We need a Linux environment (it can be Windows, but to keep things simple, let's do this on Linux) with the newest versions of Docker and the AWS CLI installed. Next, we need a configured network stack. We can use the VPC which I created in one of my previous articles, VPC and Terraform. Finally, we need Git :) Let's get started!

If you don’t know how to install Docker, follow the example below. I’ll show you how to create an EC2 instance from the Amazon Linux AMI and install the necessary packages.

Before you start, make sure you have configured your AWS CLI credentials. You also need the VPC ID and a subnet ID from the VPC where you are going to run the EC2 instance (make sure this subnet is public), as well as an image ID. Please make sure you are using the image ID of the Amazon Linux AMI.
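
If you don’t have those IDs at hand, the commands below are one way to look them up with the AWS CLI. This is just an illustrative sketch - the query expressions and the SSM parameter name (which points at the latest Amazon Linux 2 AMI) are examples and may need adjusting for your account and region.

$ aws configure
$ aws ec2 describe-vpcs --query 'Vpcs[].{Id:VpcId,Cidr:CidrBlock}' --output table
$ aws ec2 describe-subnets --filters Name=vpc-id,Values=<your-vpc-id> --query 'Subnets[].{Id:SubnetId,Az:AvailabilityZone,Public:MapPublicIpOnLaunch}' --output table
$ aws ssm get-parameters --names /aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2 --query 'Parameters[0].Value' --output text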

  • Create a security group and note the group ID.
$ aws ec2 create-security-group --group-name AWS-Jumpbox --description "AWS Jumpbox" --vpc-id <your-vpc-id>
  • Configure the security group to allow access over TCP port 22 (SSH).
$ aws ec2 authorize-security-group-ingress --group-id <your group id> --protocol tcp --port 22 --cidr 0.0.0.0/0 --region <your aws region>
  • Create the SSH Key Pair, and upload it to AWS.

$ ssh-keygen -t rsa -f ~/.ssh/aws-jumpbox-key
# Do not create a passphrase
$ aws ec2 import-key-pair --region <your aws region> --key-name aws-jumpbox-key --public-key-material file://~/.ssh/aws-jumpbox-key.pub
  • Create a text file script.sh with the following content.
#!/bin/bash
yum update -y
yum install docker -y
curl -o /usr/local/bin/ecs-cli https://s3.amazonaws.com/amazon-ecs-cli/ecs-cli-linux-amd64-latest
chmod +x /usr/local/bin/ecs-cli
service docker start
usermod -aG docker ec2-user
  • Finally, run this command to start your EC2 instance
$ aws ec2 run-instances --image-id <image-id-ami> --count 1 --instance-type t2.micro --key-name aws-jumpbox-key --user-data file://script.sh --subnet-id <subnet-id> --security-group-ids <security-group-id> --associate-public-ip-address

Note that this is just an example. For more options, see the AWS CLI Command Reference.

You will get JSON-formatted information about your EC2 instance. Grab the InstanceId and run the command below to find the public IP associated with the instance. Next, connect to this machine over SSH and check the installed Docker version.

$ aws ec2 describe-instances --instance-ids <instance-id> --region <your aws region> |grep "PublicIpAddress"
$ ssh ec2-user@<public-ip-address-ec2> -i ~/.ssh/aws-jumpbox-key
$ docker info
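
If you prefer not to grep through the JSON, the --query flag can extract the public IP directly. The commands below are an optional variant of the step above: the first one simply waits until the instance is running.

$ aws ec2 wait instance-running --instance-ids <instance-id> --region <your aws region>
$ aws ec2 describe-instances --instance-ids <instance-id> --region <your aws region> --query 'Reservations[0].Instances[0].PublicIpAddress' --output text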

Now it’s time to create a Docker registry - the place where we’re going to store our Docker images. AWS comes with a great and easy-to-set-up service called Elastic Container Registry. Go to AWS Console -> Elastic Container Service -> Repositories. Click on Get started and enter the name of the repository. I’ll name it jenkins. AWS will automatically assign the repository URI. Click on Next step. You will get a nice tutorial on how to use this repository.

[Image: ecs2]

You can also use the AWS CLI to create the repository, just type the following command:

$ aws ecr create-repository --repository-name jenkins
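
The repository URI is included in the output of that command. If you need it again later, one way to query it at any time is:

$ aws ecr describe-repositories --repository-names jenkins --query 'repositories[0].repositoryUri' --output text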

Next, go to your Linux shell and run the command below to retrieve the login command, which we can use to authenticate the Docker client to your registry (it might vary depending on your current region).

Note: make sure you have AWS CLI version > 1.11.91. Otherwise, you will get an error message that the --no-include-email flag is not supported.

$ $(aws ecr get-login --no-include-email --region eu-west-1) 
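
Note that in AWS CLI version 2 the get-login subcommand was removed. If you happen to be on v2, the equivalent login looks roughly like this (replace the account ID and region with your own):

$ aws ecr get-login-password --region eu-west-1 | docker login --username AWS --password-stdin <your-aws-account-id>.dkr.ecr.eu-west-1.amazonaws.com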

It will automatically log you in to AWS ECR. Now our repository is ready to use. Time to create a Dockerfile, where we configure our new Jenkins Docker image. I’m going to customize it a bit by installing some of the most popular plugins and tools. Create a plugins.txt file with the following content:

git-client:2.7.3
credentials:2.1.18
apache-httpcomponents-client-4-api:4.5.5-3.0
jsch:0.1.54.2
ssh-credentials:1.14
structs:1.15
aws-codepipeline:0.38

Next, create our Dockerfile.

FROM jenkins/jenkins:lts

USER root

RUN apt-get update && apt-get install net-tools -y

USER jenkins

COPY plugins.txt /usr/share/jenkins/plugins.txt

RUN /usr/local/bin/plugins.sh /usr/share/jenkins/plugins.txt

Build the Docker image and add a tag. It might take some time because Docker will pull the Jenkins image from the Docker Hub repository.

$ docker build -t jenkins .
$ docker tag jenkins:latest <your-aws-account-id>.dkr.ecr.eu-west-1.amazonaws.com/jenkins:latest

Finally, run the following command to push the image to your newly created AWS repository:

$ docker push <your-aws-account-id>.dkr.ecr.eu-west-1.amazonaws.com/jenkins:latest

Now you can go to AWS ECR and browse the contents of your repository to see the new image. Optionally, run the command below to see it.

$ aws ecr list-images --repository-name jenkins

Once we have our image ready, it’s time to create our ECS infrastructure and deploy the service. There are a few ways of doing that. You can create the infrastructure using the web interface, the AWS CLI, or even Terraform. Because I’m used to the docker-compose style for building Docker environments, I’m going to use the ecs-cli tool.

ECS-CLI was installed during provisioning of our EC2 instance, using the user-data script. Before you can start using it, you must configure it.

$ ecs-cli configure profile --access-key AWS_ACCESS_KEY_ID --secret-key AWS_SECRET_ACCESS_KEY --profile-name <your-custom-name>

Now let’s create a cluster. I’m going to use two EC2 based instances for my new cluster called “dev-cluster”.

$ ecs-cli configure --cluster dev-cluster --region eu-west-1 --default-launch-type EC2 --config-name dev-cluster
$ ecs-cli up --keypair <ssh-key-pair> --capability-iam --size 2 --instance-type t2.micro --cluster-config dev-cluster

This command will automatically create the whole infrastructure for you, including the VPC, subnets, IAM roles, EC2 instances, and the cluster. The next step is to create a docker-compose file with the description of our container. I’m going to use a very simple example, just for the purpose of the test.
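
Before moving on, you can check that both container instances have registered with the cluster. One way to do that (assuming the eu-west-1 region used throughout this example) is:

$ aws ecs describe-clusters --clusters dev-cluster --region eu-west-1 --query 'clusters[0].registeredContainerInstancesCount'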

# The name of the folder defines the name of the ECS task
$ mkdir jenkins-dev
$ cd jenkins-dev
$ vi docker-compose.yml

docker-compose.yml

version: '2'
services:
 jenkins:
  image: <your-aws-account-id>.dkr.ecr.eu-west-1.amazonaws.com/jenkins:latest
  ports:
   - '80:8080'
   - '50000:50000'
  logging:
   driver: awslogs
   options:
    awslogs-group: dev-jenkins
    awslogs-region: eu-west-1
    awslogs-stream-prefix: jenkins
  volumes:
   - 'jenkins_home:/var/jenkins_home'
volumes:
 jenkins_home:

Time to start our Docker container:

$ ecs-cli compose up --create-log-groups --file docker-compose.yml --cluster-config dev-cluster

Go to AWS Console -> ECS -> Clusters -> dev-cluster to see your newly created ECS task.
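
You can also check the running task from the shell, for example:

$ ecs-cli ps --cluster-config dev-cluster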

[Image: ecs3]

We can now access our Jenkins instance. Click on the container instance used by your task; there you can find the public IP address associated with it. Open your browser and go to this address. You should see the Jenkins login panel.
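
If you prefer the CLI, roughly the following chain of commands (illustrative, assuming a single container instance is returned) leads to the same public IP:

$ aws ecs list-container-instances --cluster dev-cluster --region eu-west-1
$ aws ecs describe-container-instances --cluster dev-cluster --container-instances <container-instance-arn> --region eu-west-1 --query 'containerInstances[0].ec2InstanceId' --output text
$ aws ec2 describe-instances --instance-ids <ec2-instance-id> --region eu-west-1 --query 'Reservations[0].Instances[0].PublicIpAddress' --output text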

[Image: ecs4]

The initial admin password can be found in the CloudWatch Logs associated with this task. Go to CloudWatch -> Logs -> dev-jenkins -> look for the latest log stream and a pattern similar to the one below.

[Image: ecs5]
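
Alternatively, you can search the log group from the CLI. Filtering on the word "password" should surface the message with the generated value, although the exact wording may differ between Jenkins versions:

$ aws logs filter-log-events --log-group-name dev-jenkins --region eu-west-1 --filter-pattern "password"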

Simply copy it and paste it into the Jenkins login dashboard. For now, let’s skip the initial configuration and go to Manage Jenkins -> Manage Plugins. You will see the plugins we selected while building our Docker image.

[Image: ecs6]

So far we have created a task definition and started a single task. This is good for short-running jobs, but if we plan to run a long-running process and use all the high-availability capabilities of ECS, we should create a service. You can do this by running the commands below.

# First, stop the containers started from the compose file
$ ecs-cli compose down --file docker-compose.yml --cluster-config dev-cluster
# Start the service
$ ecs-cli compose --file docker-compose.yml service up --cluster-config dev-cluster

Then go back to the AWS console to see the newly created service in the ECS cluster.
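
The same information is available from the shell, for instance:

$ ecs-cli compose --file docker-compose.yml service ps --cluster-config dev-cluster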

[Image: ecs7]

That’s all for this tutorial :) I hope you now understand some basics of containerization and Docker. I’m going to continue this topic in my next posts.

Don’t forget to stop your service and remove the cluster. Otherwise, AWS can bill you for the usage :)

$ ecs-cli compose --file docker-compose.yml service rm --cluster-config dev-cluster
$ ecs-cli down --force --cluster-config dev-cluster
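
If you also want to clean up the ECR repository and the jumpbox EC2 instance created at the beginning, something like the following should do it (the --force flag deletes the repository even if it still contains images):

$ aws ecr delete-repository --repository-name jenkins --force
$ aws ec2 terminate-instances --instance-ids <instance-id> --region <your aws region>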

Thanks for reading, and I hope you had a good time here.