{"id":345,"date":"2019-04-12T21:08:36","date_gmt":"2019-04-12T21:08:36","guid":{"rendered":"https:\/\/bootstrap-it.com\/blog\/?p=345"},"modified":"2019-04-12T21:08:36","modified_gmt":"2019-04-12T21:08:36","slug":"an-in-depth-introduction-to-docker-on-aws","status":"publish","type":"post","link":"https:\/\/bootstrap-it.com\/blog\/?p=345","title":{"rendered":"An in-depth introduction to Docker on AWS"},"content":{"rendered":"<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/cdn-images-1.medium.com\/max\/2400\/1*6kZO8kBIc4hQPRYS-Q47nQ.jpeg\" alt=\"\"\/><figcaption>Photo by&nbsp;<a href=\"https:\/\/unsplash.com\/photos\/tjX_sniNzgQ?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText\" rel=\"noreferrer noopener\" target=\"_blank\">frank mckenna<\/a>&nbsp;on&nbsp;<a 
href=\"https:\/\/unsplash.com\/search\/photos\/container?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText\" rel=\"noreferrer noopener\" target=\"_blank\">Unsplash<\/a><\/figcaption><\/figure>\n\n\n\n<p>Container virtualization\u200a\u2014\u200amost visibly represented by Docker\u200a\u2014\u200ais a server paradigm that will likely drive enterprise computing for years to come.<\/p>\n\n\n\n<p>The Cloud is the most obvious and logical platform for container deployment.<\/p>\n\n\n\n<p>Amazon Web Services largely dominates the cloud computing world. Add it up. If you\u2019re interested in getting a piece of all this action, you\u2019ll definitely want to figure out how it all works.<\/p>\n\n\n\n<p>First, though, let\u2019s quickly define some key terms.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"3507\">Virtualization<\/h3>\n\n\n\n<p>Virtualization is the division of physical computer and networking resources into smaller, more flexible units, presenting these smaller units to users as though each was a discrete resource.<\/p>\n\n\n\n<p>The idea is that, instead of assigning specific computing tasks to individual physical servers\u200a\u2014\u200awhich may sometimes end up being over- or underused\u200a\u2014\u200aa single physical server can be logically divided into as few or as many virtual servers as needed.<\/p>\n\n\n\n<p>That means, as the figure below illustrates, there can be dozens of individually installed operating systems (OS) running side by side on the same hard drive. 
Each OS is effectively unaware that it isn\u2019t all alone in its local environment.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/cdn-images-1.medium.com\/max\/1600\/1*3LwvyzY0vH1anumAMJpT-Q.png\" alt=\"\"\/><figcaption>Multiple applications being served through physical servers or, via VMs, from a single virtualization server<\/figcaption><\/figure>\n\n\n\n<p>Practically, each operating system instance can be accessed remotely by both administrators and customers in exactly the same way as any other server.<\/p>\n\n\n\n<p>In this kind of environment, as soon as your virtual server completes its task or becomes unnecessary, you can instantly delete it. This will free up the resources it was using for the next task in the queue.<\/p>\n\n\n\n<p>There\u2019s no need to over-provision virtual servers to anticipate possible future needs, because future needs can be easily met whenever they arrive.<\/p>\n\n\n\n<p>In fact, today\u2019s virtual server might only live a few minutes or even seconds before, having completed its task, being shut down for good to make room for whatever\u2019s next. All this allows for far more efficient use of expensive hardware. It provides the ability to provision and launch new servers at will, either to test new configurations or add fresh power to your production services.<\/p>\n\n\n\n<p>Cloud computing providers like AWS use virtualized computers of one kind or another. 
The hundreds of thousands of&nbsp;<a href=\"https:\/\/aws.amazon.com\/ec2\/\" rel=\"noreferrer noopener\" target=\"_blank\">Amazon EC2<\/a>&nbsp;instances, for example, all run on top of the open source&nbsp;<a href=\"https:\/\/www.xenproject.org\/\" rel=\"noreferrer noopener\" target=\"_blank\">Xen<\/a>&nbsp;or&nbsp;<a href=\"https:\/\/www.linux-kvm.org\/page\/Main_Page\" rel=\"noreferrer noopener\" target=\"_blank\">KVM<\/a>&nbsp;<a href=\"https:\/\/en.wikipedia.org\/wiki\/Hypervisor\" rel=\"noreferrer noopener\" target=\"_blank\">hypervisors<\/a>\u200a\u2014\u200awhich are themselves installed and running on many thousands of physical servers maintained in Amazon\u2019s vast server farms.<\/p>\n\n\n\n<p>Whatever hypervisor technology is being used, the goal is to provide a largely automated hosting environment for multiple complete, self-contained virtual computers.<\/p>\n\n\n\n<p>Containers like Docker, on the other hand, aren\u2019t standalone virtual machines but are modified file systems sharing the operating system kernel of their physical host. That\u2019s what we\u2019ll discuss next.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"d7bc\">Containers<\/h3>\n\n\n\n<p>What are containers? Well, for one thing, they\u2019re not hypervisors. Instead, they\u2019re extremely lightweight virtual servers that, as you can see from the figure, rather than running as full operating systems, share the underlying kernel of their host OS.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/cdn-images-1.medium.com\/max\/1600\/1*LXn6LnuC7tR5GdrivpD3oQ.png\" alt=\"\"\/><figcaption>Virtualized containers running with access to their host\u2019s kernel and hardware resources<\/figcaption><\/figure>\n\n\n\n<p>Containers can be built from plain-text scripts, created and launched in seconds, and easily and reliably shared across networks. 
Container technologies include the&nbsp;<a href=\"https:\/\/linuxcontainers.org\/\" rel=\"noreferrer noopener\" target=\"_blank\">Linux Container<\/a>&nbsp;project, which was Docker\u2019s original inspiration.<\/p>\n\n\n\n<p>The script-friendly container design makes it easy to automate and remotely manage complex clusters of containers, often deployed as microservices.<\/p>\n\n\n\n<p>Microservices is a compute services architecture where multiple containers are deployed, each with a distinct yet complementary role. You might, therefore, launch one container as a database back-end, another as a file server, and a third as a web server.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"0ad4\">Docker<\/h4>\n\n\n\n<p>As I explore in&nbsp;<a href=\"http:\/\/pluralsight.pxf.io\/c\/1191769\/424552\/7490?subId1=solving&amp;u=https%3A%2F%2Fapp.pluralsight.com%2Fprofile%2Fauthor%2Fdavid-clinton\" rel=\"noreferrer noopener\" target=\"_blank\">one or two of my Pluralsight courses<\/a>, a Docker container is an image whose behavior is defined by a script. The container is launched as a software process that\u2019s cunningly disguised as a server.<\/p>\n\n\n\n<p>But what\u2019s an image? It\u2019s a software file containing a snapshot of a full operating system file system. Everything necessary to launch a viable virtual server is included.<\/p>\n\n\n\n<p>An image might consist of just a base operating system like Ubuntu Linux, or the tiny and super-fast Alpine Linux. But an image could also include additional layers with software applications like web servers and databases. 
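Those layers correspond one-to-one to Dockerfile instructions. Here is a minimal sketch of the idea (the image tag, package, and file path below are illustrative assumptions, not taken from this article):

```dockerfile
# Base layer: a tiny operating system image
FROM alpine:3.9

# Additional layer: install a web server package
RUN apk add --no-cache nginx

# Additional layer: bake some site content into the image
# (the destination path is illustrative)
COPY index.html /usr/share/nginx/html/index.html
```

Each instruction adds a read-only layer on top of the one before it, and the finished stack is the immutable image.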
No matter how many layers an image has and how complicated the relationships between them might be, the image itself never changes.<\/p>\n\n\n\n<p>When, as shown in the next figure, an image is launched as a container, an extra writable layer is automatically added into which the record of any ongoing system activity is saved.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/cdn-images-1.medium.com\/max\/1600\/1*m3AjYg_AJC5PgTLL_Nki1A.png\" alt=\"\"\/><figcaption>A simple MySQL\/Ubuntu Docker image illustrated with its writable data&nbsp;layer<\/figcaption><\/figure>\n\n\n\n<p>What do folks commonly do with their Docker containers? Often, they\u2019ll load up some kind of app development project to test how it will work, and then share it with team members for feedback and updates. When the app is complete, it can be launched as a cluster of containers (or \u201cswarm\u201d as Docker calls it) that can be programmatically and instantly scaled up or down according to user demand.<\/p>\n\n\n\n<p>While Docker is a Linux-based technology and requires a Linux kernel to run, running remote or even local Docker containers on Mac or Windows machines is possible through either the Docker for Mac or Docker for Windows apps or, for older machines, through the Docker Machine tool.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"7139\">Cloud computing<\/h3>\n\n\n\n<p>Cloud computing is the provision of on-demand, self-service compute, memory, and storage resources remotely over a network.<\/p>\n\n\n\n<p>Since cloud-based services are billed in very small increments, you can quickly configure and launch a wide range of projects. And since the resources are all virtual, launching them as part of an experiment or to solve some short-term problem will often make lots of sense. 
When the work\u2019s all done, the resource is shut down.<\/p>\n\n\n\n<p>Cloud platforms let you do things that would be impossible\u200a\u2014\u200aor impossibly expensive\u200a\u2014\u200aanywhere else.<\/p>\n\n\n\n<p>Unsure how long your project will run or how much demand it will attract? Perhaps purchasing, building, and housing all the expensive hardware you\u2019d need to properly support your project in-house can\u2019t be justified.<\/p>\n\n\n\n<p>Investing heavily in server, cooling, and routing equipment might just not make sense.<\/p>\n\n\n\n<p>But if you could rent just enough of someone else\u2019s equipment to match fast-changing demand levels and pay only for what you actually use, then it might work.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"fe29\">AWS<\/h3>\n\n\n\n<p>There\u2019s no shortage of ways to manage Docker containers on AWS. In fact, between frameworks, orchestration interfaces, image repositories, and hybrid solutions, the variety can get confusing.<\/p>\n\n\n\n<p>This article won\u2019t dive deeply into every option, but you should at least be aware of all your choices:<\/p>\n\n\n\n<p>Amazon\u2019s&nbsp;<a href=\"https:\/\/aws.amazon.com\/ecs\/\" rel=\"noreferrer noopener\" target=\"_blank\">EC2 Container Service<\/a>&nbsp;(ECS) leverages specially configured EC2 instances as hosts for integrated Docker containers. You don\u2019t have to get your hands dirty on the EC2 instance itself, as you can provision and administer your containers through the ECS framework. ECS now offers greater abstraction (and simplicity) through its Fargate mode option.<\/p>\n\n\n\n<p><a href=\"https:\/\/aws.amazon.com\/cloudformation\/\" rel=\"noreferrer noopener\" target=\"_blank\">AWS CloudFormation<\/a>&nbsp;allows you to configure any combination of AWS resources into a template that can be deployed one or many times. You can include specified dependencies and custom parameters in the template. 
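As a rough sketch of what such a template looks like, here is one with a single custom parameter and a single resource; this is a hypothetical fragment, and the AMI ID is a placeholder you would replace with a real one for your region:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal sketch of a CloudFormation template

Parameters:
  KeyName:                   # custom parameter supplied at deploy time
    Type: AWS::EC2::KeyPair::KeyName

Resources:
  DockerHost:                # a single EC2 instance that could host Docker
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t2.micro              # Free Tier-eligible size
      ImageId: ami-0123456789abcdef0      # placeholder AMI ID
      KeyName: !Ref KeyName
```

Deploying the same template twice produces two identical stacks, which is what makes templates so repeatable.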
Given its self-contained and scriptable design, CloudFormation is a natural environment for Docker deployments. In fact, Docker itself offers its Docker for AWS service (currently in beta), which will automatically generate a CloudFormation template to orchestrate a swarm of Docker containers to run on AWS infrastructure within your account.<\/p>\n\n\n\n<p><a href=\"https:\/\/aws.amazon.com\/elasticbeanstalk\/\" rel=\"noreferrer noopener\" target=\"_blank\">AWS Elastic Beanstalk<\/a>&nbsp;effectively sits on top of ECS. It allows you to deploy your application across all the AWS resources normally used by ECS, but with virtually all of the logistics neatly abstracted away. In practice, all you need in order to launch a fully scalable, complex microservices environment is a declarative JSON-formatted script in a file called&nbsp;<code>Dockerrun.aws.json<\/code>. You can either upload your script through the GUI or deploy it from an initialized local directory using the AWS Beanstalk CLI.<\/p>\n\n\n\n<p><a href=\"https:\/\/aws.amazon.com\/eks\/\" rel=\"noreferrer noopener\" target=\"_blank\">Amazon Elastic Container Service for Kubernetes<\/a>&nbsp;(EKS) is currently still in preview. It\u2019s a tool allowing you to manage containers using the open source Kubernetes orchestrator, but without having to install your own clusters. Like ECS, EKS will deploy all the necessary AWS infrastructure for your clusters without manual intervention.<\/p>\n\n\n\n<p><a href=\"https:\/\/www.docker.com\/docker-aws\" rel=\"noreferrer noopener\" target=\"_blank\">Docker for AWS<\/a>&nbsp;is, at the time of writing, still in beta. 
Using its browser interface, you can install and run a \u201cswarm of Docker Engines\u201d that are fully integrated with AWS infrastructure services like auto scaling, load balancing (ELB), and block storage.<\/p>\n\n\n\n<p>Docker Datacenter (now marketed as part of&nbsp;<a href=\"https:\/\/www.docker.com\/enterprise-edition\" rel=\"noreferrer noopener\" target=\"_blank\">Docker Enterprise Edition<\/a>) is a joint AWS\/Docker project that provides commercial customers with a more customizable interface for integrating Docker with AWS, Azure, and IBM infrastructures.<\/p>\n\n\n\n<p><a href=\"https:\/\/cloud.docker.com\/\" rel=\"noreferrer noopener\" target=\"_blank\">Docker Cloud<\/a>, much like Docker Datacenter, offers a graphical, browser-based console for managing all aspects of your Docker deployments. This includes administration for your host nodes running in public clouds. The big difference is that, unlike Datacenter, the Docker Cloud administration service is hosted from its own site. There\u2019s no server software to install on your own equipment.<\/p>\n\n\n\n<p><a href=\"https:\/\/hub.docker.com\/\" rel=\"noreferrer noopener\" target=\"_blank\">Docker Hub<\/a>&nbsp;is probably the obvious first place to look for and to share Docker images. Provided by Docker itself, Docker Hub holds a vast collection of images that come pre-loaded to support all kinds of application projects. You can find and research images on the hub.docker.com web site, and then pull them directly into your own Docker Engine environment.<\/p>\n\n\n\n<p><a href=\"https:\/\/aws.amazon.com\/ecr\/\" rel=\"noreferrer noopener\" target=\"_blank\">EC2 Container Registry<\/a>&nbsp;(ECR) is Amazon\u2019s own image registry to go with its EC2 Container Service platform. Images can be pushed, pulled, and managed through the AWS GUI or CLI tool. Permissions policies can restrict image access to only the people you select.<\/p>\n\n\n\n<p>I think you\u2019re ready to start. 
If you haven\u2019t yet, do head over to the Amazon Web Services site to create an AWS account. In case you\u2019re not yet familiar with how this all works, new accounts get a generous full year of experimentation with any service level that\u2019s eligible for the Free Tier. Assuming you\u2019re still in your first year, nothing we\u2019re going to do in this article should cost you a penny.<\/p>\n\n\n\n<p>Next, we\u2019ll pop the lid off Docker and see how it works at its most basic level: your laptop command line. Technically, this has very little relevance to AWS workloads, but it\u2019ll be a great way to better understand the workflow.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"3d9b\">Introduction to&nbsp;Docker<\/h3>\n\n\n\n<p>Properly visualizing how all the many AWS parts work will probably be easier if you first understand what\u2019s going on under the hood with Docker itself. So in this article I\u2019ll walk you through launching and configuring a simple Docker container on my local workstation.<\/p>\n\n\n\n<p>Ready to go?<\/p>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"b4ed\">The Docker command&nbsp;line<\/h4>\n\n\n\n<p>Let\u2019s see how this thing actually works. I\u2019m going to get Docker up and running on my local workstation and then test it out with a quick hello-world operation. I will then pull a real working Ubuntu image and run it.<\/p>\n\n\n\n<p>I won\u2019t go through the process of installing Docker on your machine here for a few reasons. First of all, the specifics will vary greatly depending on the operating system you\u2019re running. But they\u2019re also likely to frequently change, so anything I write here will probably be obsolete within a short while. And finally, none of this is all that relevant to AWS. 
Check out Docker\u2019s own instructions at&nbsp;<a href=\"https:\/\/docs.docker.com\/install\" rel=\"noreferrer noopener\" target=\"_blank\">docs.docker.com\/install<\/a>.<\/p>\n\n\n\n<p>Along the way I\u2019ll try out some of Docker\u2019s command line tools, including creating a new network interface and associating a container with it. This is the kind of environment configuration that can be very useful for real-world deployments involving multiple tiers of resources that need to be logically separated.<\/p>\n\n\n\n<p>Most Linux distributions now use&nbsp;<a href=\"https:\/\/www.freedesktop.org\/wiki\/Software\/systemd\/\" rel=\"noreferrer noopener\" target=\"_blank\">systemd<\/a>&nbsp;via the&nbsp;<a href=\"https:\/\/www.freedesktop.org\/software\/systemd\/man\/systemctl.html\" rel=\"noreferrer noopener\" target=\"_blank\">systemctl<\/a>&nbsp;command to handle processes. In this case&nbsp;<code>systemctl start docker<\/code>&nbsp;will launch the Docker daemon if it\u2019s not already running.&nbsp;<code>systemctl status docker<\/code>&nbsp;will return some useful information, including in-depth error messages if something has gone wrong. Here, everything looks healthy.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"># systemctl start docker<br># systemctl status docker<\/pre>\n\n\n\n<p>That\u2019s the only Linux-specific bit. From here on in we\u2019ll be using commands that\u2019ll work anywhere Docker\u2019s properly installed.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"b333\">Launch a container<\/h4>\n\n\n\n<p>Running commands from the Docker command line always begins with the word \u201cdocker\u201d. The normal first test of a newly installed system is to use&nbsp;<code>docker run<\/code>&nbsp;to launch a small image\u200a\u2014\u200athe purpose-built \u201chello-world\u201d image in this case.<\/p>\n\n\n\n<p>As you can tell from the output below, Docker first looked for the image on the local system. 
Docker is particularly efficient in that way. It will always try to reuse locally available elements before turning to remote sources.<\/p>\n\n\n\n<p>In this case, since there are no existing images in this new environment, Docker goes out to pull hello-world from the official Docker library.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">$ docker run hello-world<br>Unable to find image 'hello-world:latest' locally<br>latest: Pulling from library\/hello-world<br>ca4f61b1923c: Pull complete<br>Digest: sha256:66ef312bbac49c39a89aa9bcc3cb4f3c9e7de3788c944158df3ee0176d32b751<br>Status: Downloaded newer image for hello-world:latest<br>Hello from Docker!<br>This message shows that your installation appears to be<br>working correctly. To generate this message, Docker took the<br>following steps:<br>1. The Docker client contacted the Docker daemon.<br>2. The Docker daemon pulled the \"hello-world\" image<br>from the Docker Hub. (amd64)<br>3. The Docker daemon created a new container from that<br>image which runs the executable that produces the output you<br>are currently reading.<br>4. The Docker daemon streamed that output to the Docker client,<br>which sent it to your terminal.<br>To try something more ambitious, you can run an Ubuntu container<br>with:<br>$ docker run -it ubuntu bash<br>Share images, automate workflows, and more with a free Docker ID:<br><a href=\"https:\/\/cloud.docker.com\/\" rel=\"noreferrer noopener\" target=\"_blank\">https:\/\/cloud.docker.com\/<\/a><br>For more examples and ideas, visit:<br><a href=\"https:\/\/docs.docker.com\/engine\/userguide\/\" rel=\"noreferrer noopener\" target=\"_blank\">https:\/\/docs.docker.com\/engine\/userguide\/<\/a><\/pre>\n\n\n\n<p>The full output of this command includes a useful four-part description of what just happened. The Docker client contacted the Docker daemon which proceeded to download the hello-world image from the repository. 
The image is converted to a running container by the&nbsp;<code>docker run<\/code>&nbsp;command whose output is streamed to our command line shell\u200a\u2014\u200athe Docker client.<\/p>\n\n\n\n<p>Let me break that jargon down for you just a bit:<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li><strong>Docker client<\/strong>\u200a\u2014\u200athe command line shell activated by running docker commands<\/li><li><strong>Docker daemon<\/strong>\u200a\u2014\u200athe local Docker process we started just before with the&nbsp;<code>systemctl<\/code>&nbsp;command<\/li><li><strong>Image<\/strong>\u200a\u2014\u200aa file containing the data that will be used to make up an operating system<\/li><\/ul>\n\n\n\n<p>Typing just&nbsp;<code>docker<\/code>&nbsp;will print a useful list of common commands along with brief descriptions, and&nbsp;<code>docker info<\/code>&nbsp;will return information about the current state of our Docker client.<\/p>\n\n\n\n<p>Notice how we\u2019ve currently got one container and one image (the hello-world container) and that there are zero containers running right now.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">$ docker info<br>Containers: 1<br>Running: 0<br>Paused: 0<br>Stopped: 1<br>Images: 3<br>Server Version: 1.13.1<br>Storage Driver: aufs<br>Root Dir: \/var\/lib\/docker\/aufs<br>Backing Filesystem: extfs<br>Dirs: 28<br>Dirperm1 Supported: true<br>Logging Driver: json-file<br>Cgroup Driver: cgroupfs<br>Plugins:<br>Volume: local<br>Network: bridge host macvlan null overlay<\/pre>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"830b\">Interactive container sessions<\/h4>\n\n\n\n<p>Let\u2019s try out the \u201cmore ambitious\u201d&nbsp;<code>docker run -it ubuntu bash<\/code>&nbsp;command that the Docker documentation previously suggested. 
This will download the latest official base Ubuntu image and run it as a container.<\/p>\n\n\n\n<p>The&nbsp;<code>-i<\/code>&nbsp;option will make the session interactive, meaning you\u2019ll be dropped into a live shell within the running container where you\u2019ll be able to control things like you would on any other server. The&nbsp;<code>-t<\/code>&nbsp;argument will allocate a terminal (TTY) for the session.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">$ docker run -it ubuntu bash<br>Unable to find image 'ubuntu:latest' locally<br>latest: Pulling from library\/ubuntu<br>1be7f2b886e8: Pull complete<br>6fbc4a21b806: Pull complete<br>c71a6f8e1378: Pull complete<br>4be3072e5a37: Pull complete<br>06c6d2f59700: Pull complete<br>Digest: sha256:e27e9d7f7f28d67aa9e2d7540bdc2b33254b452ee8e60f388875e5b7d9b2b696<br>Status: Downloaded newer image for ubuntu:latest<br>root@c81a051f6f03:\/#<\/pre>\n\n\n\n<p>Note the new command line prompt&nbsp;<code>root@c81a051f6f03:\/#<\/code>. We\u2019re now actually inside a minimal but working Docker container.<\/p>\n\n\n\n<p>We can, for instance, update our software repository indexes.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"># ls<br>bin dev home lib64 mnt proc run srv tmp var boot etc lib media opt root sbin sys usr<br># apt update<br>Get:1 <a href=\"http:\/\/security.ubuntu.com\/ubuntu\" rel=\"noreferrer noopener\" target=\"_blank\">http:\/\/security.ubuntu.com\/ubuntu<\/a> xenial-security InRelease<br>Get:2 <a href=\"http:\/\/archive.ubuntu.com\/ubuntu\" rel=\"noreferrer noopener\" target=\"_blank\">http:\/\/archive.ubuntu.com\/ubuntu<\/a> xenial InRelease<br>[\u2026]<br>Fetched 24.8 MB in 48s (515 kB\/s)<br>Reading package lists... Done<br>Building dependency tree<br>Reading state information... Done<br>6 packages can be upgraded. Run 'apt list --upgradable' to see them.<\/pre>\n\n\n\n<p>If I exit the container, it will shut down and I\u2019ll find myself back in my host server. 
Typing&nbsp;<code>docker info<\/code>&nbsp;once more now shows me two stopped containers rather than just one.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">$ docker info<br>Containers: 2<br>Running: 0<br>Paused: 0<br>Stopped: 2<br>Images: 4<br>[\u2026]<\/pre>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"1359\">Running containers in the background<\/h4>\n\n\n\n<p>I could launch a container in the background by adding the&nbsp;<code>--detach=true<\/code>&nbsp;option, which will return a container ID. Listing all active docker containers with&nbsp;<code>docker ps<\/code>&nbsp;will then show me my new&nbsp;<strong>running<\/strong>&nbsp;container.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">$ docker run -it --detach=true ubuntu bash<br>232a83013d39254657d743864ea46105319b0c1a239bc61e82cf7833554bd82f<br>$ docker ps<br>CONTAINER ID IMAGE COMMAND CREATED STATUS NAMES<br>232a83013d39 ubuntu \"bash\" 27 sec ago Up 26 sec wizardly_pasteur<\/pre>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"ed5e\">Managing containers<\/h4>\n\n\n\n<p>As you can see from the&nbsp;<code>wizardly_pasteur<\/code>&nbsp;name, the people who designed Docker compiled a rather eccentric pool of names to assign to your containers. If you\u2019d like to rename a container\u200a\u2014\u200aperhaps so managing it will require less typing\u200a\u2014\u200arun&nbsp;<code>docker rename<\/code>, followed by the current container name and the new name you\u2019d like to give it. I\u2019ll run&nbsp;<code>docker ps<\/code>&nbsp;once again to show the update in action.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">$ docker rename wizardly_pasteur MyContainer<br>$ docker ps<br>CONTAINER ID IMAGE COMMAND CREATED STATUS NAMES<br>232a83013d39 ubuntu \"bash\" 3 minutes ago Up 5 minutes MyContainer<\/pre>\n\n\n\n<p><code>docker inspect<\/code>&nbsp;followed by a container name will return pages and pages of useful information about that container\u2019s configuration and environment. 
The output snippet I\u2019ve included below displays the container\u2019s network environment details. Note that the network gateway is&nbsp;<code>172.17.0.1<\/code>&nbsp;and the container\u2019s actual IP address is&nbsp;<code>172.17.0.2<\/code>\u200a\u2014\u200athat will be useful later.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">$ docker inspect MyContainer<br>[...]<br>            \"Gateway\": \"172.17.0.1\",<br>            \"GlobalIPv6Address\": \"\",<br>            \"GlobalIPv6PrefixLen\": 0,<br>            \"IPAddress\": \"172.17.0.2\",<br>            \"IPPrefixLen\": 16,<br>            \"IPv6Gateway\": \"\",<br>            \"MacAddress\": \"02:42:ac:11:00:02\",<br>            \"Networks\": {<br>                \"bridge\": {<br>                    \"IPAMConfig\": null,<br>                    \"Links\": null,<br>                    \"Aliases\": null,<br>[...]<\/pre>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"6792\">Docker networks<\/h4>\n\n\n\n<p><code>docker network ls<\/code>&nbsp;will list all the network interfaces currently associated with our Docker client. Note in particular the&nbsp;<code>bridge<\/code>&nbsp;interface, which connects a container to the Docker&nbsp;<code>host<\/code>, allowing network communication into and out of the container.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">$ docker network ls<br>NETWORK ID          NAME            DRIVER        SCOPE<br>fa4da6f158de        bridge          bridge        local<br>18385f695b4e        host            host          local<br>6daa514c5756        none            null          local<\/pre>\n\n\n\n<p>We can create a new network interface by running&nbsp;<code>docker network create<\/code>&nbsp;followed by the name we\u2019d like to give our new interface. 
Running&nbsp;<code>inspect<\/code>&nbsp;against the new interface shows us\u200a\u2014\u200athrough the&nbsp;<code>Driver<\/code>&nbsp;value\u200a\u2014\u200athat this new interface uses the same&nbsp;<code>bridge<\/code>&nbsp;driver as the default network we saw earlier, but exists on its own&nbsp;<code>172.18.0.x<\/code>&nbsp;network. You\u2019ll remember that our default network used&nbsp;<code>172.17.0.x<\/code>.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">$ docker network create newNet<br>715f775551522c43104738dfc2043b66aca6f2946919b39ce06961f3f86e33bb<br>$ docker network inspect newNet<br>[<br>    {<br>        \"Name\": \"newNet\",<br> [...]<br>        \"Scope\": \"local\",<br>        \"Driver\": \"bridge\",<br>        \"EnableIPv6\": false,<br>        \"IPAM\": {<br>            \"Driver\": \"default\",<br>            \"Options\": {},<br>            \"Config\": [<br>                {<br>                    \"Subnet\": \"172.18.0.0\/16\",<br>                    \"Gateway\": \"172.18.0.1\"<br> [...]<br>]<\/pre>\n\n\n\n<p>Confused? My Solving for Technology book has a&nbsp;<a href=\"https:\/\/learntech.bootstrap-it.com\/chapter7.html\" rel=\"noreferrer noopener\" target=\"_blank\">chapter on basic TCP\/IP networking<\/a>.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"3943\">Moving containers between&nbsp;networks<\/h4>\n\n\n\n<p>You might sometimes want to move an existing container from one network to another\u200a\u2014\u200aperhaps you need to reorganize and better secure your resources. Try it out by moving that Ubuntu container to a different network, like the&nbsp;<code>newNet<\/code>&nbsp;interface we just created. 
Use&nbsp;<code>docker network connect<\/code>&nbsp;followed by the network name&nbsp;<code>newNet<\/code>&nbsp;and then the container name&nbsp;<code>MyContainer<\/code>.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">$ docker network connect newNet MyContainer<\/pre>\n\n\n\n<p>Running&nbsp;<code>inspect<\/code>&nbsp;on the container once again will show you that&nbsp;<code>MyContainer<\/code>&nbsp;is now connected to&nbsp;<strong>both<\/strong>&nbsp;the&nbsp;<code>bridge<\/code>&nbsp;interface with its&nbsp;<code>172.17.0.2<\/code>&nbsp;address,&nbsp;<strong>and<\/strong>&nbsp;the&nbsp;<code>newNet<\/code>&nbsp;interface on&nbsp;<code>172.18.0.2<\/code>. It\u2019s now like a computer with two network interface cards physically connected to separate networks.<\/p>\n\n\n\n<p>Don\u2019t believe me? You can successfully&nbsp;<code>ping<\/code>&nbsp;both interfaces from the command line, so we can see they\u2019re both active. All this was possible, by the way, despite the fact that the container was up and running all along. 
Don\u2019t try that on a physical machine!<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">$ ping 172.17.0.2<br>PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.<br>64 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=0.103 ms<br>64 bytes from 172.17.0.2: icmp_seq=2 ttl=64 time=0.070 ms<br>^C<br>--- 172.17.0.2 ping statistics ---<br>2 packets transmitted, 2 received, 0% packet loss, time 999ms<br>rtt min\/avg\/max\/mdev = 0.070\/0.086\/0.103\/0.018 ms<br>$ ping 172.18.0.2<br>PING 172.18.0.2 (172.18.0.2) 56(84) bytes of data.<br>64 bytes from 172.18.0.2: icmp_seq=1 ttl=64 time=0.079 ms<br>64 bytes from 172.18.0.2: icmp_seq=2 ttl=64 time=0.062 ms<br>^C<br>--- 172.18.0.2 ping statistics ---<br>2 packets transmitted, 2 received, 0% packet loss, time 999ms<br>rtt min\/avg\/max\/mdev = 0.062\/0.070\/0.079\/0.011 ms<\/pre>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"a17e\">Working with Dockerfiles<\/h4>\n\n\n\n<p>While containers can be defined and controlled from the command line, the process can be largely automated through scripts called&nbsp;<a href=\"https:\/\/docs.docker.com\/engine\/reference\/builder\/\" rel=\"noreferrer noopener\" target=\"_blank\">Dockerfiles<\/a>. Reading a&nbsp;<code>Dockerfile<\/code>&nbsp;as part of a&nbsp;<code>docker build<\/code>&nbsp;operation tells Docker to build an image using the configuration specified by the script.<\/p>\n\n\n\n<p>In the simple&nbsp;<code>Dockerfile<\/code>&nbsp;example displayed below, the&nbsp;<code>FROM<\/code>&nbsp;line tells the docker host to use Ubuntu version 16.04 as the base operating system. 
If there isn\u2019t already an Ubuntu 16.04 image on the local system, Docker will download one.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"># Simple Dockerfile<br>FROM ubuntu:16.04<br>RUN apt-get update<br>RUN apt-get install -y apache2<br>RUN echo \"Welcome to my web site\" &gt; \/var\/www\/html\/index.html<br>EXPOSE 80<\/pre>\n\n\n\n<p>Each of the&nbsp;<code>RUN<\/code>&nbsp;lines launches a command within the operating system, and its results are baked into the image\u200a\u2014\u200aeven before any container is actually launched from it.<\/p>\n\n\n\n<p>In this case,&nbsp;<code>apt-get update<\/code>&nbsp;updates the local repository indexes to permit software downloads, and&nbsp;<code>apt-get install -y apache2<\/code>&nbsp;downloads and installs the Apache webserver package. The&nbsp;<code>-y<\/code>&nbsp;automatically answers \u201cyes\u201d to any prompts included in the installation process.<\/p>\n\n\n\n<p>The&nbsp;<code>echo<\/code>&nbsp;command will replace the contents of the&nbsp;<code>index.html<\/code>&nbsp;file with my customized&nbsp;<code>Welcome<\/code>&nbsp;text.&nbsp;<code>index.html<\/code>&nbsp;is, of course, the first file a browser will look for and then load when it visits a new site.<\/p>\n\n\n\n<p>Finally,&nbsp;<code>EXPOSE 80<\/code>&nbsp;opens up port 80 on the container to allow HTTP traffic\u200a\u2014\u200anecessary because this will be a web server. This will allow us to access the web server from the Docker host machine. It\u2019ll be your responsibility to provide access to your host for any remote clients you might want to invite in.<\/p>\n\n\n\n<p>If you\u2019re up on the latest Ubuntu package management news, you\u2019ll know that there\u2019s been a shift away from&nbsp;<code>apt-get<\/code>&nbsp;to its new&nbsp;<code>apt<\/code>&nbsp;replacement. 
So why did I use&nbsp;<code>apt-get<\/code>&nbsp;in that&nbsp;<code>Dockerfile<\/code>? Because it\u2019s still more reliable for use in scripted settings.<\/p>\n\n\n\n<p>To actually build an image from this Dockerfile, you run&nbsp;<code>docker build<\/code>&nbsp;with&nbsp;<code>-t<\/code>&nbsp;to create a name (or \u201ctag\u201d) for the image. I\u2019ll go with&nbsp;<code>webserver<\/code>. You add a space and then a dot to tell Docker to read the file named&nbsp;<code>Dockerfile<\/code>&nbsp;found in the current directory. Docker will immediately get to work building an image on top of the Ubuntu image we pulled earlier, running the&nbsp;<code>apt-get<\/code>&nbsp;and&nbsp;<code>echo<\/code>&nbsp;commands along the way.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">$ docker build -t \"webserver\" .<br>Sending build context to Docker daemon 2.048 kB<br>Step 1\/5 : FROM ubuntu:16.04<br>16.04: Pulling from library\/ubuntu<br>Digest: sha256:e27e9d7f7f28d67aa9e2d7540bdc2b33254b452ee8e60f388875e5b7d9b2b696<br>Status: Downloaded newer image for ubuntu:16.04<br> ---&gt; 0458a4468cbc<br>Step 2\/5 : RUN apt-get update<br> ---&gt; Running in c25f5462e0f2<br>[...]<br>Processing triggers for systemd (229-4ubuntu21) ...<br>Processing triggers for sgml-base (1.26+nmu4ubuntu1) ...<br> ---&gt; 3d9f2f14150e<br>Removing intermediate container 42cd3a92d3ca<br>Step 4\/5 : RUN echo \"Welcome to my web site\" &gt; \/var\/www\/html\/index.html<br> ---&gt; Running in ddf45c195467<br> ---&gt; a1d21f1ba1f6<br>Removing intermediate container ddf45c195467<br>Step 5\/5 : EXPOSE 80<br> ---&gt; Running in af639e6b1c85<br> ---&gt; 7a206b180a62<br>Removing intermediate container af639e6b1c85<br>Successfully built 7a206b180a62<\/pre>\n\n\n\n<p>If I run&nbsp;<code>docker images<\/code>, I\u2019ll now see a version of my Ubuntu image with the name&nbsp;<code>webserver<\/code>.<\/p>\n\n\n\n<pre 
class=\"wp-block-preformatted\">$ docker images<br>REPOSITORY TAG IMAGE ID CREATED SIZE<br>webserver latest 7a206b180a62 3 minutes ago 250 MB<br>ubuntu 16.04 0458a4468cbc 12 days ago 112 MB<br>hello-world latest f2a91732366c 2 months ago 1.85 kB<\/pre>\n\n\n\n<p>Now we\u2019re ready to launch the container using&nbsp;<code>docker run<\/code>.<\/p>\n\n\n\n<p>Structuring this command properly is a bit of a delicate process and there\u2019s a lot that can go wrong. The&nbsp;<code>-d<\/code>&nbsp;argument tells Docker to run this container detached, meaning we won\u2019t find ourselves on the container\u2019s command line but it will be running in the background.&nbsp;<code>-p<\/code>&nbsp;tells Docker to forward any traffic coming&nbsp;<strong>on<\/strong>&nbsp;port&nbsp;<code>80<\/code>&nbsp;(the default HTTP port) through&nbsp;<strong>to<\/strong>&nbsp;port&nbsp;<code>80<\/code>&nbsp;on the container. This allows external access to the web server. I can\u2019t say that I understand why, but the order here is critical: only add the&nbsp;<code>-p<\/code>&nbsp;argument&nbsp;<strong>after<\/strong>&nbsp;<code>-d<\/code>.<br>&nbsp;<br>Next, we tell Docker the name of the container we\u2019d like to launch,&nbsp;<code>webserver<\/code>in our case. And after that, we tell Docker to run a single command once the container is running to get the Apache webserver up.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">$ docker run -d -p 80:80 webserver \\<br> \/usr\/sbin\/apache2ctl -D FOREGROUND<\/pre>\n\n\n\n<p>Perhaps you\u2019re wondering why I didn\u2019t use the more modern&nbsp;<code>Systemd<\/code>command&nbsp;<code>systemctl start apache<\/code>. Well I tried it, and discovered that, at this point at least, systemd is good and broken in Ubuntu Docker containers. 
Stay away if you know what\u2019s good for you.&nbsp;<code>-D FOREGROUND<\/code>&nbsp;ensures that Apache\u200a\u2014\u200aand the container as a whole\u200a\u2014\u200awill remain running even once the launch has completed. Run it for yourself.<\/p>\n\n\n\n<p>We\u2019re given an ID for the new container, but nothing else. You can run&nbsp;<code>docker ps<\/code>&nbsp;and you should see our&nbsp;<code>webserver<\/code>&nbsp;among the list of all running containers. You should also be able to open webserver\u2019s&nbsp;<code>index.html<\/code>&nbsp;page by pointing your browser to the container\u2019s IP address.<\/p>\n\n\n\n<p>What\u2019s that? You&nbsp;<strong>don\u2019t know<\/strong>&nbsp;your container\u2019s IP address? Well, since the container will have been associated with the default&nbsp;<code>bridge<\/code>&nbsp;network, you can use&nbsp;<code>docker network inspect bridge<\/code>&nbsp;and, within the Containers section of the output, you should find what you\u2019re after. In my case, that was&nbsp;<code>172.17.0.3<\/code>.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"01c7\">Working with Docker Hub&nbsp;images<\/h4>\n\n\n\n<p>We\u2019ve already enjoyed some of the benefits Docker Hub has to offer. The images we used to build the containers in the previous sections were all seamlessly downloaded from Docker Hub behind the scenes.<\/p>\n\n\n\n<p>In fact, using something like&nbsp;<code>docker search apache2<\/code>, you can manually comb through the repository for publicly available images that come with Apache pre-installed. You can also browse through what\u2019s available on the&nbsp;<a href=\"https:\/\/hub.docker.com\/\" rel=\"noreferrer noopener\" target=\"_blank\">Docker Hub<\/a>&nbsp;web site.<\/p>\n\n\n\n<p>However, you should remember that not all of those images are reliable or even safe. 
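One way to screen them is from the command line itself: `docker search` accepts `--filter` flags for exactly this purpose. A sketch (the guard makes it a quiet no-op where Docker isn't installed; the star threshold of 50 is an arbitrary example):

```shell
#!/bin/sh
# Sketch: limit a Docker Hub search to "official" images that have
# earned at least 50 review stars. Skipped quietly without Docker.
if command -v docker >/dev/null 2>&1
then
    docker search --filter is-official=true --filter stars=50 ubuntu
fi
searched=ok
```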
You\u2019ll want to look for results that have earned lots of review stars and, in particular, are designated as \u201cofficial.\u201d Running&nbsp;<code>docker search ubuntu<\/code>&nbsp;returns at least a few official images.<\/p>\n\n\n\n<p>Find something that interests you? You can add it to your local collection using&nbsp;<code>docker pull<\/code>. Once the download is complete, you can view your images using&nbsp;<code>docker images<\/code>.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">$ docker pull ubuntu-upstart<\/pre>\n\n\n\n<p>While you\u2019re on the Docker Hub site, take the time to create a free account. That\u2019ll allow you to store and share your own images much the way you might use a tool like GitHub. This is probably the most popular use case for Docker, as it allows team members working remotely\u200a\u2014\u200aor lazy devs working in the same office\u200a\u2014\u200ato get instant and reliable access to the exact environments being used at every stage of a project\u2019s progress.<\/p>\n\n\n\n<p>Those are the bare-bones basics, and it\u2019s important to understand them clearly. But, because of the complexity involved in coordinating clusters of dozens or thousands of containers all at once, most serious container workloads won\u2019t use those particular command line tools.<\/p>\n\n\n\n<p>Instead, you\u2019re most likely going to want a more robust and feature-rich framework. 
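To get a small taste of what such a framework adds, Docker ships with one built in: swarm mode. The sketch below turns the local host into a one-node swarm and runs the `webserver` image from earlier as a replicated service; it assumes that image exists and is skipped quietly where Docker is unavailable.

```shell
#!/bin/sh
# Sketch: a one-node swarm running two replicas of the webserver image.
# Skipped quietly where Docker or the webserver image isn't available.
if command -v docker >/dev/null 2>&1 && docker image inspect webserver >/dev/null 2>&1
then
    docker swarm init 2>/dev/null || true    # promote this host to a manager
    docker service create --name web --replicas 2 --publish 80:80 \
        webserver /usr/sbin/apache2ctl -D FOREGROUND
    docker service ls                        # one service, two replicas
fi
deployed=ok
```

If one replica's container dies, swarm mode starts a replacement automatically, which plain `docker run` will never do for you.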
You can read about some of those tools\u200a\u2014\u200aincluding Docker\u2019s own Docker Swarm Mode, Docker Enterprise Edition, or Docker Cloud, and Kubernetes\u200a\u2014\u200ain my article, \u201c<a href=\"https:\/\/hackernoon.com\/too-many-choices-how-to-pick-the-right-tool-to-manage-your-docker-clusters-b5b3061b84b7\" rel=\"noreferrer noopener\" target=\"_blank\">Too Many Choices: how to pick the right tool to manage your Docker clusters<\/a>\u201d.<\/p>\n\n\n\n<p><em>This article is largely based on&nbsp;<\/em><a href=\"http:\/\/pluralsight.pxf.io\/c\/1191769\/424552\/7490?subId1=solving&amp;u=https%3A%2F%2Fapp.pluralsight.com%2Fprofile%2Fauthor%2Fdavid-clinton\" rel=\"noreferrer noopener\" target=\"_blank\"><em>video courses I authored for Pluralsight<\/em><\/a><em>. I\u2019ve also got loads of Docker, AWS, and Linux content available through my&nbsp;<\/em><a href=\"https:\/\/bootstrap-it.com\/\" rel=\"noreferrer noopener\" target=\"_blank\"><em>website<\/em><\/a><em>, including links to my book,&nbsp;<\/em><a href=\"https:\/\/www.manning.com\/books\/linux-in-action?a_aid=bootstrap-it&amp;a_bid=4ca15fc9\" rel=\"noreferrer noopener\" target=\"_blank\"><em>Linux in Action<\/em><\/a><em>, and a hybrid course called&nbsp;<\/em><a href=\"https:\/\/www.manning.com\/livevideo\/linux-in-motion?a_aid=bootstrap-it&amp;a_bid=0c56986f&amp;chan=motion1\" rel=\"noreferrer noopener\" target=\"_blank\"><em>Linux in Motion<\/em><\/a><em>&nbsp;that\u2019s made up of more than two hours of video and around 40% of the text of Linux in Action.<\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Container virtualization\u200a\u2014\u200amost visibly represented by Docker\u200a\u2014\u200ais aserver paradigm that will likely drive enterprise computing for yearsto come. The Cloud is the most obvious and logical platform for containerdeployment. Amazon Web Services largely dominates the cloud computing world.Add it up. 
If&hellip; <a href=\"https:\/\/bootstrap-it.com\/blog\/?p=345\" class=\"more-link\">Continue Reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":346,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-345","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-uncategorized"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v20.2.1 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>An in-depth introduction to Docker on AWS - Bootstrap IT<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/bootstrap-it.com\/blog\/?p=345\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"An in-depth introduction to Docker on AWS - Bootstrap IT\" \/>\n<meta property=\"og:description\" content=\"Container virtualization\u200a\u2014\u200amost visibly represented by Docker\u200a\u2014\u200ais aserver paradigm that will likely drive enterprise computing for yearsto come. The Cloud is the most obvious and logical platform for containerdeployment. Amazon Web Services largely dominates the cloud computing world.Add it up. 
If&hellip; Continue Reading &rarr;\" \/>\n<meta property=\"og:url\" content=\"https:\/\/bootstrap-it.com\/blog\/?p=345\" \/>\n<meta property=\"og:site_name\" content=\"Bootstrap IT\" \/>\n<meta property=\"article:published_time\" content=\"2019-04-12T21:08:36+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/bootstrap-it.com\/blog\/wp-content\/uploads\/docker-aws.jpeg\" \/>\n\t<meta property=\"og:image:width\" content=\"2400\" \/>\n\t<meta property=\"og:image:height\" content=\"1564\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"dbclin\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@davidbclinton\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"dbclin\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"25 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\/\/bootstrap-it.com\/blog\/?p=345\",\"url\":\"https:\/\/bootstrap-it.com\/blog\/?p=345\",\"name\":\"An in-depth introduction to Docker on AWS - Bootstrap 
IT\",\"isPartOf\":{\"@id\":\"https:\/\/bootstrap-it.com\/blog\/#website\"},\"datePublished\":\"2019-04-12T21:08:36+00:00\",\"dateModified\":\"2019-04-12T21:08:36+00:00\",\"author\":{\"@id\":\"https:\/\/bootstrap-it.com\/blog\/#\/schema\/person\/ae0fb1d5b3b01558b92b6426d77766ec\"},\"breadcrumb\":{\"@id\":\"https:\/\/bootstrap-it.com\/blog\/?p=345#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/bootstrap-it.com\/blog\/?p=345\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/bootstrap-it.com\/blog\/?p=345#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/bootstrap-it.com\/blog\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"An in-depth introduction to Docker on AWS\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/bootstrap-it.com\/blog\/#website\",\"url\":\"https:\/\/bootstrap-it.com\/blog\/\",\"name\":\"Bootstrap IT\",\"description\":\"Learn technology using technology\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/bootstrap-it.com\/blog\/?s={search_term_string}\"},\"query-input\":\"required 
name=search_term_string\"}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\/\/bootstrap-it.com\/blog\/#\/schema\/person\/ae0fb1d5b3b01558b92b6426d77766ec\",\"name\":\"dbclin\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/bootstrap-it.com\/blog\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/a93785d437350478a7f1dfcbec58d26bc28e0124e405179acbe1b4325c09f90a?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/a93785d437350478a7f1dfcbec58d26bc28e0124e405179acbe1b4325c09f90a?s=96&d=mm&r=g\",\"caption\":\"dbclin\"},\"sameAs\":[\"http:\/\/bootstrap-it.com\/\",\"dbclinton\",\"https:\/\/twitter.com\/davidbclinton\"],\"url\":\"https:\/\/bootstrap-it.com\/blog\/?author=1\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"An in-depth introduction to Docker on AWS - Bootstrap IT","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/bootstrap-it.com\/blog\/?p=345","og_locale":"en_US","og_type":"article","og_title":"An in-depth introduction to Docker on AWS - Bootstrap IT","og_description":"Container virtualization\u200a\u2014\u200amost visibly represented by Docker\u200a\u2014\u200ais aserver paradigm that will likely drive enterprise computing for yearsto come. The Cloud is the most obvious and logical platform for containerdeployment. Amazon Web Services largely dominates the cloud computing world.Add it up. 
If&hellip; Continue Reading &rarr;","og_url":"https:\/\/bootstrap-it.com\/blog\/?p=345","og_site_name":"Bootstrap IT","article_published_time":"2019-04-12T21:08:36+00:00","og_image":[{"width":2400,"height":1564,"url":"https:\/\/bootstrap-it.com\/blog\/wp-content\/uploads\/docker-aws.jpeg","type":"image\/jpeg"}],"author":"dbclin","twitter_card":"summary_large_image","twitter_creator":"@davidbclinton","twitter_misc":{"Written by":"dbclin","Est. reading time":"25 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/bootstrap-it.com\/blog\/?p=345","url":"https:\/\/bootstrap-it.com\/blog\/?p=345","name":"An in-depth introduction to Docker on AWS - Bootstrap IT","isPartOf":{"@id":"https:\/\/bootstrap-it.com\/blog\/#website"},"datePublished":"2019-04-12T21:08:36+00:00","dateModified":"2019-04-12T21:08:36+00:00","author":{"@id":"https:\/\/bootstrap-it.com\/blog\/#\/schema\/person\/ae0fb1d5b3b01558b92b6426d77766ec"},"breadcrumb":{"@id":"https:\/\/bootstrap-it.com\/blog\/?p=345#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/bootstrap-it.com\/blog\/?p=345"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/bootstrap-it.com\/blog\/?p=345#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/bootstrap-it.com\/blog"},{"@type":"ListItem","position":2,"name":"An in-depth introduction to Docker on AWS"}]},{"@type":"WebSite","@id":"https:\/\/bootstrap-it.com\/blog\/#website","url":"https:\/\/bootstrap-it.com\/blog\/","name":"Bootstrap IT","description":"Learn technology using technology","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/bootstrap-it.com\/blog\/?s={search_term_string}"},"query-input":"required 
name=search_term_string"}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/bootstrap-it.com\/blog\/#\/schema\/person\/ae0fb1d5b3b01558b92b6426d77766ec","name":"dbclin","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/bootstrap-it.com\/blog\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/a93785d437350478a7f1dfcbec58d26bc28e0124e405179acbe1b4325c09f90a?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/a93785d437350478a7f1dfcbec58d26bc28e0124e405179acbe1b4325c09f90a?s=96&d=mm&r=g","caption":"dbclin"},"sameAs":["http:\/\/bootstrap-it.com\/","dbclinton","https:\/\/twitter.com\/davidbclinton"],"url":"https:\/\/bootstrap-it.com\/blog\/?author=1"}]}},"_links":{"self":[{"href":"https:\/\/bootstrap-it.com\/blog\/index.php?rest_route=\/wp\/v2\/posts\/345","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/bootstrap-it.com\/blog\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/bootstrap-it.com\/blog\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/bootstrap-it.com\/blog\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/bootstrap-it.com\/blog\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=345"}],"version-history":[{"count":1,"href":"https:\/\/bootstrap-it.com\/blog\/index.php?rest_route=\/wp\/v2\/posts\/345\/revisions"}],"predecessor-version":[{"id":347,"href":"https:\/\/bootstrap-it.com\/blog\/index.php?rest_route=\/wp\/v2\/posts\/345\/revisions\/347"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/bootstrap-it.com\/blog\/index.php?rest_route=\/wp\/v2\/media\/346"}],"wp:attachment":[{"href":"https:\/\/bootstrap-it.com\/blog\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=345"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/bootstrap-it.com\/blog\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=345"},{"taxonomy":"post_tag","embeddable":true,"href"
:"https:\/\/bootstrap-it.com\/blog\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=345"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}