Welcome back to my blog! In this project, we will be diving into Docker and how it can make things easier for a Cloud Engineer. We will cover why it is a great skill to have on your tool belt and also how seamlessly it integrates with AWS services.
The task that we will be completing in this lab: Your team needs you to quickly deploy a custom image and write a short script in a file to accomplish a task.
Things we need to know before we get started:
- What is Docker? Docker is an open source tool designed to make it easier to create, deploy, and run applications by using containers. Containers allow a developer to package up an application with all of the parts it needs, such as libraries and other dependencies, and ship it all out as one package. By doing so, thanks to the container, the developer can rest assured that the application will run on any other Linux machine regardless of any customized settings that machine might have that could differ from the machine used for writing and testing the code.
- What is AWS S3? Amazon S3 or Amazon Simple Storage Service is a service offered by Amazon Web Services that provides object storage through a web service interface. Amazon S3 uses the same scalable storage infrastructure that Amazon.com uses to run its global e-commerce network.
- What is Nginx? Nginx is a popular, lightweight open-source web server that is also widely used as a reverse proxy and load balancer. It is developed to run on a variety of operating systems.
What we will need to do before we get started:
- Download the AWS CLI to your local machine. If you check out my lab, “Creating an Autoscaling Group From The CLI”, I go over how to download the AWS CLI and the advantages the CLI has over using the AWS Console GUI.
- Set up a Docker account and download Docker Desktop (which includes the Docker CLI) to your local machine
- A text editor of your choice already downloaded and ready to go. I chose to use Microsoft VS Code
Now that we have that out of the way, here are the 4 tasks that we will accomplish in this lab:
- We will create a custom Docker image using Nginx
- We will add some date/time text to the file so that it doesn’t display the default page
- We will deploy the container with port 8080 open
- We will then store the data in an AWS S3 bucket
Here are a couple of websites that you should have open in a separate tab. They will help you understand the commands and what each flag represents, especially if you want to custom build your commands. I will do my best to explain them, but for textbook explanations, refer back to the actual documentation:
- Docker’s documentation that explains how to run your image
- AWS documentation that explains how to upload files to S3 from the CLI
- Docker’s documentation that explains the variables for building your container
Alright, hopefully you’re still with me. Let’s go ahead and get this party started!
Step 1- Build your image
To start things off, we first have to pull the Nginx image to our terminal so that we can build the custom image from it. Whenever you want to build a Docker image, whether it is a custom image or just the standard one, head over to the Docker Hub webpage, look for the image you're trying to build, and find the pull command. For Nginx, we used the following command:
$ docker pull nginx
Next, we are going to create a directory and cd into it to hold our Dockerfile and index.html file (we’ll get more into this later). Once you are in the directory, create two files: a Dockerfile and an index.html. It is important to have the Dockerfile established before you build your image because, in the next step, the docker build command reads your Dockerfile as the instructions for how to build the image. Let me explain what each part of my Dockerfile does:
FROM specifies the base image to build on (each instruction creates a layer).
COPY adds files from your Docker client’s current directory (the build context) into the image.
CMD specifies what command to run when the container starts.
EXPOSE documents the port(s) on which the container listens for connections.
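Putting those four instructions together, here is a sketch of what the Dockerfile could look like. The exact paths and base-image tag are my assumptions for illustration; the official Nginx image serves files from /usr/share/nginx/html by default:

```dockerfile
# Start from the official Nginx image on Docker Hub
FROM nginx:latest

# Replace the default page with our custom index.html
COPY index.html /usr/share/nginx/html/index.html

# Document that the container listens on port 80
EXPOSE 80

# Run Nginx in the foreground so the container keeps running
CMD ["nginx", "-g", "daemon off;"]
```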
Now we will use our text editor to add some code to the index.html file that we just created. This step is very important as well, since it contains the HTML that Nginx will serve for our webpage. The page I will be using displays the date and time on our static Nginx webpage instead of the default “Welcome to Nginx” page. *Changing this default page was a lot harder than I initially thought, but we will tackle that later.
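I don't have the exact markup from the original lab on hand, so here is a minimal sketch of an index.html that shows the date and time with a small JavaScript snippet; the file contents and wording are my assumptions. You could even write it straight from the terminal like this:

```shell
# Hypothetical example: write a simple index.html that displays the
# current date/time via JavaScript instead of the default Nginx page.
cat > index.html <<'EOF'
<!DOCTYPE html>
<html>
  <head><title>My custom Nginx page</title></head>
  <body>
    <h1>Hello from my custom Nginx container!</h1>
    <p>Current date and time: <span id="datetime"></span></p>
    <script>
      document.getElementById("datetime").textContent = new Date().toString();
    </script>
  </body>
</html>
EOF

# Quick sanity check that the file was written
grep -c "datetime" index.html  # → 2
```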
Once you get to this step, this next command will build your image following the instructions in your Dockerfile. But before we build, let’s go over what makes up the command. The “-t” is shorthand for “tag”, which essentially names the image that you are building. Please do not forget the trailing “.” in the command either; this “.” tells Docker to use the current directory as the build context. Now that we have that out of the way, here is the command:
$ docker build -t <name> .
Step 2- Docker Run
Once we have our custom image built, we can now run the container. The command will always start with “docker run”, and you can refer back to the webpage that I posted above to choose the flags that you want to use. “-it” keeps STDIN open even if not attached and allocates a pseudo-TTY. “--rm” automatically removes the container when it exits. “-d” runs the container in the background and prints the container ID. “-p” publishes the container’s port(s) to the host; I chose 8080:80, which maps port 8080 on my machine to port 80 inside the container. Quick note: the EXPOSE line in the Dockerfile only documents the port, it doesn’t actually publish it, so you still need “-p” here. Here is the command:
$ docker run -it --rm -d -p 8080:80 --name marshallcontainer nginx
Once that is done, you will unfortunately STILL see the default Nginx webpage, lol. But no worries, we will take care of this in the next step. Just rejoice in the fact that the container is up and running; we can always change the webpage itself. FYI, to get to this page, type localhost:8080 into your web browser’s address bar.
Now we will change the default page. The “docker exec” command allows you to enter the container itself and execute commands inside of it:

$ docker exec -it marshallcontainer /bin/bash

We are going to first display the default index.html file by using “cat index.html”, which shows the markup for the default Nginx page that we do not want for this project. I then created a new directory and changed into it. We will then create another index.html file and use VS Code again to copy and paste our time/date page inside of this file.
Once that is complete, refresh localhost:8080 and we should be able to see the custom page that we built. Now we have to upload our files to AWS S3.
Step 3- Upload our files to S3
Now that the difficult part is out of the way, all we have to do now is make our bucket using the command below and then upload the file to it. Keep in mind that S3 bucket names live in a GLOBAL namespace, so make sure yours is unique, because no two buckets can share the same name.
$ aws s3 mb s3://name
We can always head over to the console to double check our work. Once you see that your bucket is displayed, you are good to continue.
Now we will zip our docker-demo directory that contains our Dockerfile and index.html file so that we have it ready to upload to AWS S3. Use the following command:
$ zip -r <zip-file-name>.zip <your-folder-name>
Now we will upload our zip file to AWS S3 using the command below. If it was successful, you should see an upload confirmation from the AWS CLI.
$ aws s3 cp <zip-file-name>.zip s3://<your-bucket-name>/<zip-file-name>.zip
Now we will double check our work to make sure that we were successful; you can do this from the console or with “aws s3 ls s3://<your-bucket-name>”. If your zip file shows up under your bucket, congratulations! You have now completed this lab.
Thanks for joining me on this lab and I hope you have a good one!