Cloud Native 5 Minutes at a Time: Container Networking and Opening Container Ports
One of the biggest challenges for implementing cloud native technologies is learning the fundamentals—especially when you need to fit your learning into a busy schedule.
In this series, we’ll break down core cloud native concepts, challenges, and best practices into short, manageable exercises and explainers, so you can learn five minutes at a time. These lessons assume a basic familiarity with the Linux command line and a Unix-like operating system—beyond that, you don’t need any special preparation to get started.
Last time, we containerized an application with persistent storage, giving us an essential ingredient for building complex applications. Now that we can create more powerful and complicated apps inside our containers, it’s time to explore how we can make those apps accessible to the outside world.
Table of Contents
Container Networking and Opening Container Ports ← You are here
Isolated but accessible
By design, containers are systems of isolation—but we typically don’t want the functionality of an application to be walled off. We want our apps to be isolated but accessible to outside requests.
So far, when we’ve wished to interact with the contents of a container, we’ve either started an interactive shell session to work inside the container itself, or we’ve observed output that Docker has passed from the container to the terminal. Unfortunately, these methods won’t be very helpful for web applications with graphical user interfaces (GUIs) accessed through the web browser.
Besides, we’d like to do more than send information to the host machine—we want our apps to be able to interact with the outside world! That means we need to understand container networking.
What is container networking? Fundamentals explained.
Container networking is the system by which containers are assigned unique addresses and routes through which they may send and receive information. Containerized applications may need to communicate with…
One another (container-to-container)
The host machine
Requests from outside the host machine
In each case, many of the fundamental concepts are the same. First, Docker assigns each container an Internet Protocol (IP) address—if we want to find the IP address for a container, we can use the following command, substituting our container’s name or ID for <container-name>:
docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container-name>
With the inspect command, we’re asking Docker for information—stored in a JavaScript Object Notation (JSON) array—about a container instance. The --format argument helps us specify particular details we would like to retrieve: in this case, the IP address. By default, Docker assigns addresses from a range beginning with 172.17.0. If you inspect a running container, you will likely find an address such as:
172.17.0.2
Containers’ IP addresses are assigned on a private subnet. Initially, those addresses only “make sense” to other containers on the same network: you can’t reach them from another machine, and depending on your platform, you may not be able to reach them even from the host machine. (Docker Desktop for Mac and Windows, for example, doesn’t route traffic from the host to container IP addresses.)
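To see this in action, here’s a minimal sketch. The busybox image and the container name are just examples, and 172.17.0.2 is an assumption: substitute the address you actually find.

# Start a long-running container; it lands on Docker's default bridge network
docker run --name bridge-demo -d busybox sleep 3600

# Look up its bridge IP address
docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' bridge-demo

# From a second, throwaway container on the same bridge, ping that address
# (use the address printed above; 172.17.0.2 is only an example)
docker run --rm busybox ping -c 2 172.17.0.2

# Clean up
docker stop bridge-demo && docker rm bridge-demo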
We can take a high-level look at the Docker network environment using the docker network ls command. Our output will look something like this:
NETWORK ID     NAME     DRIVER   SCOPE
099f55813274   bridge   bridge   local
e9c31cf63c20   host     host     local
eb6d55a56ee3   none     null     local
We should find multiple networks here. The host network and none network are part of the Docker network stack—machinery that makes Docker run, but that we won’t interact with directly. The bridge network, however, is where the action happens: it’s the network where our containers’ IP addresses live by default. The bridge network—sometimes called the docker0 bridge—is configurable and, most importantly for our purposes, it’s where our containerized applications run unless we specify otherwise.
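If you’d like to peek under the hood of the bridge, docker network inspect shows its configuration (including the subnet that container addresses are drawn from) and which containers are currently attached:

# Dump the bridge network's full JSON description
docker network inspect bridge

# Or extract just its subnet (typically 172.17.0.0/16)
docker network inspect --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}' bridge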
Wait—I want to create my own container network!
Docker enables you to create highly configurable networks for a range of use cases with the docker network create command—and indeed, user-defined networks are an essential tool. The default bridge network disables Domain Name System (DNS) service discovery, meaning containers on this network have to communicate with one another by their specific IP addresses rather than names. That has big implications for scalability. For the purposes of this lesson, we’ll be staying on the default bridge, but user-defined networks are the preferred method for connecting multi-container apps, and we’ll be exploring them shortly.
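As a quick preview, here’s a minimal sketch; the network and container names (app-net, ping-target) are hypothetical:

# Create a user-defined bridge network
docker network create app-net

# On user-defined networks, Docker's built-in DNS lets containers
# find one another by name, with no IP addresses required
docker run --name ping-target -d --network app-net busybox sleep 3600
docker run --rm --network app-net busybox ping -c 2 ping-target

# Clean up
docker stop ping-target && docker rm ping-target
docker network rm app-net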
Web services running on a given container will send and receive information through a particular port inside the container. These ports—just like naval ports—are places where journeys begin and end. A port is designated by a number appended to an IP address after a colon. The designation below refers to port 8000 at a particular IP address:
172.17.0.2:8000
Now let’s try observing this in practice—and taking it a step further.
Exercise: Port mapping
Let’s create a new container based on Docker Hub’s official image for the nginx web server:
docker run --name nginx-test -d nginx
The -d argument means we’re running this container in “detached mode”: the process is detached from our terminal session and runs in the background. We can verify this with…
docker container ls
…which should return something like this:
CONTAINER ID     IMAGE   ...   PORTS    NAMES
<container-id>   nginx   ...   80/tcp   nginx-test
Note the port: nginx is running on port 80 within the container. Now, if an application were running on port 80 on our host machine, we could access it by navigating to localhost:80 in our web browser. Let’s try that now.
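From a terminal, the equivalent check looks like this, assuming nothing else on the machine is already listening on port 80:

# No published port means nothing on the host is listening on port 80,
# so this request fails with "connection refused"
curl http://localhost:80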
Hmm. Well, what if we look up the IP address of the container and try to access it that way?
docker container inspect --format '{{ .NetworkSettings.IPAddress }}' nginx-test
172.17.0.2
Your browser will try to load the address, but to no avail: on Docker Desktop, the host can’t route traffic to container IP addresses at all, and even on a Linux host, where this address is reachable locally, it means nothing to any other machine.
All right, let’s stop the container, which is still running in the background, then delete it so we can start from scratch.
docker stop nginx-test
docker rm nginx-test
What’s the problem here?
We aren’t able to access the port because the container’s network address is still isolated from the host machine. Fortunately, we can bridge the gap by “publishing” the port. (Sometimes people refer to this as “port mapping” or “port binding.”) If you’re familiar with the way a virtual machine can connect to external networks through virtual ports, a similar idea is in play here.
We’ll make our nginx container accessible from the host machine by connecting the container port to a port on the host machine. Docker provides a powerful range of options here, but for the time being, we’ll keep things simple and connect port 8000 on our host machine to port 80 on the nginx-test container.
docker run --name nginx-test -d -p 8000:80 nginx
The -p argument helps us specify that we want to use the host machine’s port 8000 (on the left side of the colon) to access the container’s port 80 (on the right). The syntax here might remind you a bit of how we connect volumes to directories within a container.
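Incidentally, -p accepts a few other useful forms; the container names below are just examples:

# Bind the published port to the host's loopback interface only,
# so the app is reachable from the host but not from other machines
docker run --name nginx-local -d -p 127.0.0.1:8000:80 nginx

# Or use -P (capital P) to publish every port the image exposes,
# each mapped to a random high-numbered port on the host
docker run --name nginx-random -d -P nginx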
After running our port-mapping command, we can test whether it worked by navigating to localhost:8000, where we should be greeted by nginx’s default welcome page.
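Or, if you prefer the terminal, a quick curl check does the same job:

# With port 8000 published, this request reaches nginx inside the container
# and prints the welcome page's HTML
curl http://localhost:8000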
Success! Now we can access the containerized application on our host machine—and from here, we could serve it to the outside world with the right configuration. In other words, we could take the app to production—a big step with important security implications, so we’ll save it for a future lesson.
In the meantime, stop (and if you wish, remove) the container we created today.
docker stop nginx-test
docker rm nginx-test
Next time, we’ll combine what we’ve learned so far to run a complex web application with persistent volumes and published ports.