Background

This is the penultimate post in a series where I’m looking at using Locust with Docker to do some load testing against an Apache HTTP Server instance.

Posts in this Series

Part 3 - Running Docker Containers With Their Own IP Addresses (macvlan)

Running a Docker Container on a Different IP to the Host

A key part of this piece of work is being able to run multiple Locust Docker containers on the same host and have each of them firing requests into the Apache HTTP Server from a different IP address. To do that, we need, in effect, to give each container its own IP address.

It is possible to add a virtual IP to the host, and then bind container ports to that virtual IP. That means incoming traffic can be routed to the container via the virtual IP, but any traffic leaving the container will still be routed via the host's primary IP address. In many scenarios using Docker containers, this would be absolutely fine. I started to research the issue and found suggestions around using iptables to route the outgoing traffic from the container. I didn't like this, because I didn't want to create and maintain an iptables configuration. My preference when building a contraption is to keep all of the configuration for that contraption in one place. I did see that it was possible to set up a container in such a way that starting it would create the iptables rules, but I figured there had to be a better way.
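For illustration, here is a minimal sketch of the virtual IP approach I decided against. The adapter name eno1 and the 192.168.1.50 address are assumptions for this example:

# Add a secondary (virtual) IP to the host's adapter
sudo ip addr add 192.168.1.50/24 dev eno1

# Bind the container's published port to that virtual IP only
sudo docker run --rm --name standalone \
    -p 192.168.1.50:8089:8089 \
    -d debian:locust

Incoming requests to 192.168.1.50:8089 reach the container, but requests made by the container itself still leave via the host's primary address, which is exactly the problem described above.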

I sat down and read the Docker documentation on networking, and stumbled upon macvlan networks. Attaching a container to a macvlan network allows it to appear to the network as if it has its own physical interface, because Docker will assign an automatically generated MAC address to each container. In the way that it works, I feel Docker's macvlan network is closer to a Linux network bridge than to a Docker bridge network.

Creating and Using a macvlan Network

I created a macvlan network with the following command:

sudo docker network create --driver macvlan \
    --subnet=172.16.0.0/21 \
    --gateway=172.16.0.1 \
    --ip-range=172.16.5.0/24 \
    --opt parent=eno1 locustnet

--driver specifies the driver type. The default is bridge.

--subnet is the physical subnet that the macvlan network is attached to.

--gateway is the router for the network segment that will handle traffic destined for other networks. This is important if you are relying on a router, a firewall, or even a VPN to route traffic to and from your containers.

--ip-range is the range of IPs that can be allocated to Docker containers. It is important to set this so you do not end up with IP collisions on the network. It is worth noting that the network masks for --subnet and --ip-range are given in CIDR notation.

--opt passes additional options to the driver, and we use it to specify the physical adapter to which the macvlan network is attached. You will need to change this to match the name of the adapter on your host. If you are using Linux, you can use ip link show to list the physical adapters on the host. Using legacy adapter names, it could be eth0 or eth1.

Finally, we specify the name of the network. In this case, locustnet.
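To sanity-check the result, you can ask Docker to echo the IPAM settings back. The --format template below is just one way of trimming the JSON that docker network inspect produces:

sudo docker network inspect --format '{{json .IPAM.Config}}' locustnet

This should print something like [{"Subnet":"172.16.0.0/21","IPRange":"172.16.5.0/24","Gateway":"172.16.0.1"}], matching the values we just supplied.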

Test the Locust Container with the macvlan Docker Network

With the macvlan network created, we can create a container attached to it. This is very similar to the command we used to run the container in part 2, save for the addition of two extra parameters:

sudo docker run --rm --name standalone --hostname standalone \
    -v /home/user/dockerLoadTesting:/locust \
    --net locustnet --ip 172.16.5.1 -p 8089:8089 \
    -e targetHost="https://somebox.somedomain.sometld" \
    -d debian:locust

In this version of the Docker run command, we use --net to tell Docker that this container should be attached to the network that we created in the prior step. In addition, we tell Docker what IP address this container should have on that network with the --ip parameter.
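To confirm the container actually picked up the address, you can read it back out of docker inspect. The same template with MacAddress in place of IPAddress will show the automatically generated MAC address mentioned earlier:

sudo docker inspect \
    --format '{{.NetworkSettings.Networks.locustnet.IPAddress}}' standalone

This should print 172.16.5.1.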

That’s it!

With the Docker container started, you should be able to access the running standalone instance via http://172.16.5.1:8089.
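One macvlan caveat worth knowing: the Linux kernel blocks traffic between a macvlan interface and its parent, so while other machines on the network can reach 172.16.5.1, the Docker host itself cannot by default. If you want to test from the host, a common workaround is to add a macvlan shim interface on the host and route the container range through it. This is a sketch only; the shim name is arbitrary, and 172.16.4.254 is an assumed address chosen inside the subnet but outside the --ip-range:

# Create a macvlan interface on the host, attached to the same parent
sudo ip link add locust-shim link eno1 type macvlan mode bridge

# Give it an address outside the container --ip-range and bring it up
sudo ip addr add 172.16.4.254/32 dev locust-shim
sudo ip link set locust-shim up

# Route the container range via the shim rather than the parent
sudo ip route add 172.16.5.0/24 dev locust-shim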

In part 4 we pull everything together with Docker Compose.