A question that crops up regularly on #docker is "How do I attach a container directly to my local network?" One possible answer to that question is the macvlan network type, which lets you create "clones" of a physical interface on your host and use that to attach containers directly to your local network. For the most part it works great, but it does come with some minor caveats and limitations. I would like to explore those here.

For the purpose of this example, let's say we have a host interface eno1 that looks like this:

2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 64:00:6a:7d:06:1a brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.175/24 brd 192.168.1.255 scope global dynamic eno1
       valid_lft 73303sec preferred_lft 73303sec
    inet6 fe80::b2c9:3793:303:2a55/64 scope link 
       valid_lft forever preferred_lft forever

To create a macvlan network named mynet attached to that interface, you might run something like this:

docker network create -d macvlan -o parent=eno1 \
  --subnet 192.168.1.0/24 \
  --gateway 192.168.1.1 \
  mynet

...but don't do that.

Address assignment

When you create a container attached to your macvlan network, Docker will select an address from the subnet range and assign it to your container. This leads to the potential for conflicts: if Docker picks an address that has already been assigned to another host on your network, you have a problem!

You can avoid this by reserving a portion of the subnet range for use by Docker. There are two parts to this solution:

  • You must configure any DHCP service on your network such that it will not assign addresses in a given range.

  • You must tell Docker about that reserved range of addresses.

How you accomplish the former depends entirely on your local network infrastructure and is beyond the scope of this document. The latter task is accomplished with the --ip-range option to docker network create.

On my local network, my DHCP server will not assign any addresses above 192.168.1.190. I have decided to assign to Docker the subnet 192.168.1.192/27, which is a range of 32 addresses starting at 192.168.1.192 and ending at 192.168.1.223. The corresponding docker network create command would be:

docker network create -d macvlan -o parent=eno1 \
  --subnet 192.168.1.0/24 \
  --gateway 192.168.1.1 \
  --ip-range 192.168.1.192/27 \
  mynet
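
If you want to sanity-check the range arithmetic, Python's standard ipaddress module can do it. Here 192.168.1.192/27 is an example reserved range; substitute whatever range you chose for your own network:

```shell
# Confirm that a /27 holds 32 addresses, and print the first and last.
python3 -c '
import ipaddress
net = ipaddress.ip_network("192.168.1.192/27")
print(net.num_addresses)   # 32
print(net[0], net[-1])     # 192.168.1.192 192.168.1.223
'
```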

Now it is possible to create containers attached to my local network without worrying about the possibility of IP address conflicts.
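
For example, assuming the network is named mynet and that 192.168.1.192/27 is the reserved range (both of these are illustrative; adjust for your own setup), you can start a throwaway container and see what address Docker assigned:

```shell
# Let Docker pick an address from the reserved --ip-range:
docker run --rm --network mynet alpine ip addr show eth0

# Or pick a specific address from the range yourself:
docker run --rm --network mynet --ip 192.168.1.200 alpine ip addr show eth0
```

Note that an address passed via --ip only has to fall inside --subnet; Docker does not require it to be inside --ip-range, so take care to choose from your reserved block.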

Host access

With a container attached to a macvlan network, you will find that while it can contact other systems on your local network without a problem, the container will not be able to connect to your host (and your host will not be able to connect to your container). This is a limitation of macvlan interfaces: without special support from a network switch, your host is unable to send packets to its own macvlan interfaces.

Fortunately, there is a workaround for this problem: you can create another macvlan interface on your host, and use that to communicate with containers on the macvlan network.

First, I'm going to reserve an address from our network range for use by the host interface by using the --aux-address option to docker network create. That makes our final command line look like:

docker network create -d macvlan -o parent=eno1 \
  --subnet 192.168.1.0/24 \
  --gateway 192.168.1.1 \
  --ip-range 192.168.1.192/27 \
  --aux-address 'host=192.168.1.223' \
  mynet

This will prevent Docker from assigning that address to a container.

Next, we create a new macvlan interface on the host. You can call it whatever you want, but I'm calling this one mynet-shim:

ip link add mynet-shim link eno1 type macvlan mode bridge

Now we need to configure the interface with the address we reserved and bring it up:

ip addr add 192.168.1.223/32 dev mynet-shim
ip link set mynet-shim up

The last thing we need to do is to tell our host to use that interface when communicating with the containers. This is relatively easy because we have restricted our containers to a particular CIDR subset of the local network; we just add a route to that range like this:

ip route add 192.168.1.192/27 dev mynet-shim

With that route in place, your host will automatically use the mynet-shim interface when communicating with containers on the mynet network.
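
Assuming a container is running at 192.168.1.200 (an example address from the reserved range), you can confirm from the host that traffic is taking the new path:

```shell
# The route lookup should now select the shim interface:
ip route get 192.168.1.200

# And the container should answer pings that previously failed:
ping -c1 192.168.1.200
```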

Note that the interface and routing configuration presented here is not persistent -- you will lose it if you reboot your host. How to make it persistent is distribution dependent.
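
For instance, on a systemd-based distribution you could recreate the shim at boot with a oneshot unit along these lines (the unit name, interface names, and addresses here are examples, not a canonical recipe):

```ini
# /etc/systemd/system/mynet-shim.service (example path)
[Unit]
Description=Create the mynet-shim macvlan interface
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/sbin/ip link add mynet-shim link eno1 type macvlan mode bridge
ExecStart=/sbin/ip addr add 192.168.1.223/32 dev mynet-shim
ExecStart=/sbin/ip link set mynet-shim up
ExecStart=/sbin/ip route add 192.168.1.192/27 dev mynet-shim

[Install]
WantedBy=multi-user.target
```

Enable it with systemctl enable mynet-shim.service; on other distributions the equivalent might be an ifupdown post-up stanza or a NetworkManager dispatcher script.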

Ansible 2.0: The Docker connection driver

As the release of Ansible 2.0 draws closer, I'd like to take a look at some of the new features that are coming down the pipe. In this post, we'll look at the docker connection driver.

A "connection driver" is the mechanism by which Ansible connects to your target …


Running NTP in a Container

Fri 09 October 2015 by Lars Kellogg-Stedman · Tags: docker, atomic

Someone asked on IRC about running ntpd in a container on Atomic, so I've put together a small example. We'll start with a very simple Dockerfile:

FROM alpine
RUN apk update
RUN apk add openntpd

I'm using the alpine image as my starting point because it's very small …


Heat-kubernetes Demo with Autoscaling

Next week is the Red Hat Summit in Boston, and I'll be taking part in a Project Atomic presentation in which I will discuss various (well, two) options for deploying Atomic into an OpenStack environment, focusing on my heat-kubernetes templates.

As part of that presentation, I've put together a short …


Suggestions for the Docker MAINTAINER directive

Mon 27 April 2015 by Lars Kellogg-Stedman · Tags: docker

Because nobody asked for it, this is my opinion on the use of the MAINTAINER directive in your Dockerfiles.

The documentation says simply:

The MAINTAINER instruction allows you to set the Author field of the generated images.

Many people end up putting the name and email address of an actual …
