Container networking basics

One of Clarke’s laws says that any sufficiently advanced technology is indistinguishable from magic, and with container technologies I find that to be true more often than not.

Container networking might be one of those areas where magic seems to happen all the time. As you might know, the problem with magic is that you can’t troubleshoot it, so if something fails you are doomed to go through a costly and stressful process of trial and error.

One of the most valued skills in a technical position is the ability to slice a big, hairy, seemingly magical, logic-defying problem into tiny pieces and solve them one at a time. This process builds a knowledge foundation that eventually gives you a full understanding of a product, scenario or technology. The good news is that anyone can do it.

In this article we’re going to take container networking apart into tiny pieces and look at each of them individually. This will help me learn more about container networking as I’ll have to do research to write the article, but hopefully it will help you too!

We have established that, for some of us, container networking feels magical. This is because things just work when the container needs outbound connectivity, and there’s little to do (just expose ports) when it needs inbound connectivity. This is true on a default Docker installation, but what does a default Docker installation entail on the networking side of things?

Docker network isolation

Containers have their own networking namespace, meaning their network resources are isolated from other containers and from the host where they run. This implies a container can’t (or shouldn’t be able to) communicate with any resources outside itself, so we need some magic to change that while keeping a certain degree of isolation. This is where network drivers and network driver plugins come into play.
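
If you want to see that isolation for yourself, here is a minimal sketch using the Python Docker SDK (docker-py, installed with "pip install docker"); the SDK and the alpine image are just my tooling choices, and any inspection method would show the same thing. It starts a throwaway container and prints its private network namespace and IP address, both of which belong to the container rather than to the host:

    # Minimal sketch, assuming a local Docker daemon, docker-py and the alpine image.
    import docker

    client = docker.from_env()

    # Start a throwaway container; it gets its own network namespace.
    container = client.containers.run("alpine", "sleep 60", detach=True)
    container.reload()  # refresh the cached `docker inspect` data

    settings = container.attrs["NetworkSettings"]
    print(settings["SandboxKey"])                       # the container's own network namespace
    print(settings["Networks"]["bridge"]["IPAddress"])  # its private IP, not the host's

    container.remove(force=True)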

The challenge for Docker is to offer network capabilities equivalent to those of a VM or a bare-metal server, while also covering scenarios that didn’t exist until now, like container-to-container communication either on the same host or across different hosts.

Docker network drivers

Docker Engine supports different networking drivers that are meant to provide the containers with network functionality and a certain degree of communication with the outside world. At the time of writing, these drivers are bridge, host, overlay, macvlan and none. By default, the containers in a Docker installation will use the bridge driver.
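
As a quick sanity check, here is a small docker-py sketch (the tooling is again my assumption) that lists the networks a stock installation creates; you should typically see bridge, host and none, each backed by the driver of the same name:

    # Minimal sketch: list the default networks and the driver behind each one.
    import docker

    client = docker.from_env()
    for net in client.networks.list():
        print(net.name, "->", net.attrs["Driver"])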

Bridge driver

The bridge driver creates an isolated L3 network for the container and bridges it with the host’s network. It accomplishes that by doing two things:

Outbound connections will have their source IP address translated to the host’s interface IP address, potentially changing the source port to an available one.

Inbound connections should be pointed to the host’s interface IP address on one of the exposed ports. If the destination port is not one of the exposed ports, the connection is not handed over to Docker, meaning it never reaches the container. If it is one of the exposed ports, the traffic is sent to the container with the destination IP translated from the host’s interface IP address to the container’s interface IP address.
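
Here is a sketch of the inbound case with docker-py; the nginx:alpine image and host port 8080 are arbitrary choices for illustration:

    # Publish container port 80 on host port 8080: traffic hitting the host's IP
    # on 8080 is NATed to the container's private IP on port 80. Traffic to any
    # other host port is never handed over to Docker.
    import docker

    client = docker.from_env()
    web = client.containers.run("nginx:alpine", detach=True, ports={"80/tcp": 8080})
    # Equivalent CLI: docker run -d -p 8080:80 nginx:alpine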

The isolated L3 network created this way can be used by multiple containers on the same Docker host, effectively keeping communications between them (e.g. WebApp container -> SQL container) private. These communications don’t require the user to expose any ports, as containers have full network access to each other inside the isolated L3 network.
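
A small docker-py sketch of that scenario (the network and container names are made up for the example): two containers on the same user-defined bridge network reach each other without exposing a single port:

    # Two containers on a private bridge network; no ports are published.
    import docker

    client = docker.from_env()
    client.networks.create("backend-net", driver="bridge")

    db = client.containers.run("alpine", "sleep 300", name="db",
                               detach=True, network="backend-net")
    app = client.containers.run("alpine", "sleep 300", name="app",
                                detach=True, network="backend-net")

    # User-defined bridge networks also resolve container names, so "db" works here.
    result = app.exec_run("ping -c 1 db")
    print(result.exit_code)  # 0 means the private network is doing its job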

This is the default driver as it’s the easiest one to use. It just works for outbound connectivity and requires little configuration for inbound. Another advantage of the bridge driver is that it is fairly clean to run several containers and several isolated L3 networks on the same host.

Host driver

The host driver pretty much removes the network isolation, so the container uses the host’s network resources like any other process running on the host. In networking terms, it would be no different from running the application on the host outside a container.

Multiple containers on the same host won’t talk to each other using a private or isolated network. They will communicate just as any other processes running on the host do.
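
A quick docker-py sketch of the host driver (nginx:alpine is again just an example image); notice there is no port mapping at all, nginx simply binds to port 80 on the host:

    # No NAT, no port mapping: the container shares the host's network stack.
    import docker

    client = docker.from_env()
    web = client.containers.run("nginx:alpine", detach=True, network_mode="host")
    # Equivalent CLI: docker run -d --network host nginx:alpine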

Overlay driver

The overlay driver encapsulates container traffic inside UDP (VXLAN) or, in other words, it uses UDP tunnels to connect different containers, usually running on different hosts.

As we’ve seen with the bridge driver, the private network only exists inside the host. That is a problem for highly available solutions where you want to have multiple copies of a service on different hosts, not just for performance but also for availability reasons.

The main reason to use the overlay driver, then, is to extend your containers’ private network to other Docker hosts. This requires an external component to act as some sort of IPAM/metadata server so containers know how to reach other containers on different hosts. This server is known as a swarm manager and can be any of your Docker hosts.

Overlay networks aren’t encrypted by default, so treat the traffic as not private unless you enable encryption.
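
Here is a sketch of how an overlay network could be created with docker-py, assuming this host becomes the swarm manager and the other hosts would join it with "docker swarm join"; the network name is made up:

    # Turn this host into the swarm manager (the IPAM/metadata server mentioned above),
    # then create an attachable overlay network that containers on any node can join.
    import docker

    client = docker.from_env()
    client.swarm.init()
    net = client.networks.create("multi-host-net", driver="overlay", attachable=True)
    # Traffic on this network is unencrypted by default; the CLI's
    # "--opt encrypted" flag on "docker network create" turns encryption on.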

Macvlan driver

The Macvlan driver creates a new interface on the host with its own MAC address, which appears directly on the network as a real interface. This way your container has direct access to the same network the host is connected to. For example, if the host is a bare-metal server, your container will be connected to the physical network.

The Macvlan driver can be used in bridge mode, where you get a separate network device on the host, or in 802.1Q trunk mode, where you get a subinterface on an existing network device on the host.

It makes sense to use the Macvlan driver when there are specific networking needs, like having the containers in the same network as the host, but not sharing the host’s IP address.
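
A docker-py sketch of such a setup; the subnet, gateway and parent interface (eth0) are placeholders you would replace with your physical network’s values:

    # Create a macvlan network tied to the host's eth0 and attach a container to it.
    import docker

    client = docker.from_env()

    ipam = docker.types.IPAMConfig(
        pool_configs=[docker.types.IPAMPool(subnet="192.168.1.0/24",
                                            gateway="192.168.1.1")]
    )
    client.networks.create(
        "physical-net",
        driver="macvlan",
        ipam=ipam,
        options={"parent": "eth0"},  # something like "eth0.100" for an 802.1Q trunk
    )

    # This container gets its own MAC address and an IP on 192.168.1.0/24,
    # showing up on the physical LAN right next to the host.
    client.containers.run("alpine", "sleep 300", detach=True, network="physical-net")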

None driver

The None driver just disables networking. Poof! Gone! Nothing! Zero! Nada!

It is used when you plan on using a third-party network driver.
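
For completeness, a tiny docker-py sketch of a container started with the none driver; it ends up with nothing but a loopback interface:

    # The container has no external connectivity, only "lo".
    import docker

    client = docker.from_env()
    isolated = client.containers.run("alpine", "sleep 300", detach=True,
                                     network_mode="none")
    print(isolated.exec_run("ip addr").output.decode())  # only the loopback shows up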

Docker network driver plugins

Docker Engine’s network drivers are extensible, meaning anyone can extend their functionality with plugins. These plugins can provide support for different protocols, making container networking really flexible.

Some popular network plugins are Cilium and Weave.
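
Once a plugin is installed (for example with "docker plugin install"), it is selected like any built-in driver when you create a network. A hypothetical docker-py sketch, with made-up plugin and network names:

    # "some-vendor/net-plugin" stands in for whatever network plugin you installed.
    import docker

    client = docker.from_env()
    client.networks.create("plugin-net", driver="some-vendor/net-plugin")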

Life after Docker

This article is focused on Docker, but there is life after it. Docker’s networking model is known as CNM or Container Networking Model, while other vendors like CoreOS have chosen to go with a different approach known as CNI or Container Network Interface. Kubernetes and a number of other projects have chosen to mainly support CNI, which means I’ll probably write about CNI in the future.

Meanwhile, The New Stack has a good article about the differences between CNM and CNI: https://thenewstack.io/container-networking-landscape-cni-coreos-cnm-docker/