Measuring Maximum Network Bandwidth in a Raspberry Pi Cluster

iPerf and Ansible make this too easy :)

With a cluster of 6 Raspberry Pi 3Bs connected via a basic 100 Mbps Ethernet switch, I was curious about the maximum bandwidth that could be achieved between the Pis using the built-in Ethernet support.

Scenarios

I wanted to consider two extreme scenarios where the communicating processes on the Pis can use up all available bandwidth.

  1. Best Case Scenario where every Pi communicates exclusively with another Pi and these communications happen in parallel. In this case, each client-server pair has a dedicated path through the switch, so I expected the bandwidth on my cluster to peak close to 100 Mbps.
  2. Worst Case Scenario where a Pi is designated as master and every non-master Pi communicates with the master Pi in parallel. In this case, the master's single 100 Mbps link is shared by the five clients, so I expected the bandwidth on my cluster to peak close to 20 Mbps.

Tools

I used iPerf 2.0 and Ansible 2.5.1 for these measurements.

iPerf enables measurement of bandwidth on IP networks. In basic server mode, it accepts incoming connections and reports bandwidth information for each connection. In basic client mode, it connects to an iPerf server and reports bandwidth information for that connection.

Ansible enables orchestration of tasks on a network of nodes. It connects to a given set of nodes via SSH (specified in a YAML-based inventory) and executes a given set of commands specified in YAML-based playbooks.
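To make that concrete, here is a minimal sketch of what a YAML inventory for this cluster might look like; the group name and the pi login user are my own illustrative choices, not necessarily the exact files used in this experiment.

```yaml
# Hypothetical inventory sketch: all six Pis in one group, reached over SSH
# as the 'pi' user (group name and user are assumptions, not the real files).
all:
  vars:
    ansible_user: pi
  children:
    pis:
      hosts:
        192.168.2.10:
        192.168.2.11:
        192.168.2.12:
        192.168.2.13:
        192.168.2.14:
        192.168.2.15:
```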

Experiment

To use Ansible for automation, I configured a Linux box and the Pis to allow logging into the Pis from the Linux box via SSH using SSH keys.

For the best case scenario, my plan was to

  1. Execute iperf in server mode on three Pis (192.168.2.10/11/12 in my cluster). I did this by logging into the Pis and executing iperf -s.
  2. Execute iperf in client mode on three other Pis (192.168.2.13/14/15) and have each of them communicate with a specific server Pi (192.168.2.10/11/12, respectively) for 20 seconds. I did this by executing ansible-playbook -i rasppis-best-case.yml iperf-client-best-case.yml; a sketch of such a playbook follows this list.
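Here is a minimal sketch of what a playbook along these lines might look like. The iperf_clients group and the per-host iperf_server variable (pairing each client Pi with its server Pi) are assumptions of mine; the free strategy lets each host move through the play without waiting for the others.

```yaml
# Hypothetical best-case playbook sketch. Assumes an 'iperf_clients' group
# (192.168.2.13-15) whose hosts each define an 'iperf_server' variable
# pointing at their paired server Pi (192.168.2.10/11/12).
- hosts: iperf_clients
  strategy: free              # each client proceeds independently, in parallel
  tasks:
    - name: Run iperf against the paired server Pi for 20 seconds
      command: iperf -c {{ iperf_server }} -t 20
      register: iperf_result

    - name: Show the reported bandwidth
      debug:
        var: iperf_result.stdout_lines
```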

As expected, the bandwidth in this case peaked at 94.1 Mbps.

For the worst case scenario, my plan was to

  1. Execute iperf in server mode on the master Pi (192.168.2.10 in my cluster). I did this by logging into the master Pi and executing iperf -s.
  2. Execute iperf in client mode on the other Pis (192.168.2.11–15) and have each of them communicate with the master Pi for 20 seconds. I did this by executing ansible-playbook -i rasppis-worst-case.yml iperf-client-worst-case.yml; a sketch of such a playbook follows this list.
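Again as a sketch, the worst-case playbook differs mainly in that every client targets the same master Pi; the group name below is my own illustrative choice.

```yaml
# Hypothetical worst-case playbook sketch. Assumes an 'iperf_clients' group
# containing the five non-master Pis (192.168.2.11-15), all of which hit
# the master Pi at the same time.
- hosts: iperf_clients
  strategy: free              # all five clients run concurrently
  tasks:
    - name: Run iperf against the master Pi for 20 seconds
      command: iperf -c 192.168.2.10 -t 20
      register: iperf_result

    - name: Show the reported bandwidth
      debug:
        var: iperf_result.stdout_lines
```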

As expected, the bandwidth in this case peaked at 19 Mbps.

Closing Thoughts

The above experiment took about two hours in all, including looking up information about Ansible and iPerf, creating the scripts, running them, posting them on GitHub, and typing up this blog post :)

In my first outing with Ansible, used to update packages on the Pis and to power them off, I was impressed by how Ansible makes basic orchestration easy. As I used more interesting features of Ansible (e.g., parallel execution of plays using the free strategy) in this exercise, I was further impressed by how Ansible makes even involved orchestration easy. While one has to dig around the documentation a bit, the information is available. More importantly, Ansible has simple yet powerful features to support interesting orchestration scenarios.

At this time, Ansible is my go-to tool for orchestration tasks :)
