Monday, 20 October 2025

ROS2 with Docker Compose on a Raspberry Pi 5

(This article is now updated to work with Pi OS on Wayland)

This post continues our exploration of ROS2 running in Docker containers on top of Raspberry Pi OS (see the previous post in this blog). Managing the Docker containers can be simplified further with the Docker Compose tool. If you followed the Docker installation instructions in my previous post then you already have Docker Compose installed, so you can get straight on with using it.
Docker Compose allows us to manage multiple containers with a single configuration file. A Docker network is created automatically so the containers can communicate with each other. The tool reads a YAML file named docker-compose.yml which contains the complete configuration of each container.

In the previous post, we created our container with a command line containing several parameters. These can be put into the docker-compose.yml file to simplify the process.
Start with an empty directory for your ROS Docker Compose project, and create a new file named docker-compose.yml in this directory.

If you are using an older Pi OS image using X11, add the following contents to your docker-compose.yml file:

services:
  sim:
    image: ros_docker:latest
    environment:
      DISPLAY:
      QT_X11_NO_MITSHM: 1
    volumes:
      - /tmp/.X11-unix:/tmp/.X11-unix:rw
    command: ros2 run turtlesim turtlesim_node


Unfortunately this no longer works since Pi OS updated to use the Wayland display server. The Wayland configuration requires us to go a little deeper into Docker Compose and user permissions.

ROS in Docker under Wayland

I created my project in a folder named 'wayland-jazzy'. Docker Compose uses the folder name to generate docker image and container names, so you can tell which project they belong to. We need to define some environment variables for our Pi user. Rather than hard-coding these, we will fetch them from the actual logged-in user. From a terminal prompt in your project folder, run the following commands:

echo "LOCAL_UID=$(id -u)" > .env
echo "LOCAL_GID=$(id -g)" >> .env

You should now have a file named .env containing the user and group ids from your machine. Check the contents of this:

cat .env
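If you want a scripted sanity check (a sketch, run from the project folder), you can confirm that both entries were written and are numeric:

```shell
# Recreate the .env file (same commands as above) and validate its contents.
echo "LOCAL_UID=$(id -u)" > .env
echo "LOCAL_GID=$(id -g)" >> .env
# Count the lines matching NAME=number; both entries should match.
grep -cE '^LOCAL_(U|G)ID=[0-9]+$' .env   # prints 2
```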

My IDs were both 1000, but yours may be different. We can now pull these into our docker image so that it can set up the required user and groups. This means we need a project Dockerfile. Start by copying the Dockerfile from the ros_docker repo we checked out in the previous tutorial. The jazzy desktop on noble image for example is in the folder:
~/docker_images/ros/jazzy/ubuntu/noble/desktop
We are going to customise this Dockerfile to add a missing group (group name: render), add the ubuntu user to this group, and specify that the container should run as the user ubuntu. But first we need to get the group id for the render group from our host machine. Run the following command:
getent group render

This should display the group information, including an id. Mine returned:
render:x:105:footleg,vnc

This shows that the render group has id 105 on my host, along with the users in the group. What we need is that id. Open the Dockerfile you copied into your project folder and add the following (but use the render group id you got instead of 105 if yours was different):

RUN groupadd -g 105 render || true \
 && groupadd -g 997 input || true \
 && usermod -aG render,input ubuntu

USER ubuntu
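Incidentally, you don't have to read the id out of the getent output by eye: the gid is the third colon-separated field of the record, so cut can extract it. A small sketch using my sample output:

```shell
# Sample line as returned by `getent group render` (yours may differ):
line="render:x:105:footleg,vnc"
# The numeric group id is field 3 of the colon-separated record:
echo "$line" | cut -d: -f3   # prints 105
```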


The entire Dockerfile in your project folder should now look something like this (the jazzy version may have changed since this was written, so match what you got from your checkout of the docker_images repo):

FROM ros:jazzy-ros-base-noble

# install ros2 packages
RUN apt-get update && apt-get install -y --no-install-recommends \
    ros-jazzy-desktop=0.11.0-1* \
    && rm -rf /var/lib/apt/lists/*

RUN groupadd -g 105 render || true \
    && usermod -aG render ubuntu

USER ubuntu

For Wayland we are also going to build our docker image using compose. So rather than referencing an existing image name (like we did in the older X11 version above), we will add a build section to build the image from our local Dockerfile. We also need a user entry which reads our local environment variables from the .env file, and we need to add the group 'render'. The rest of the configuration is brought over from the Wayland container creation command line used in the previous tutorial. Here is the complete docker-compose.yml file for Wayland:

services:
  sim:
    build:
      context: .
      args:
        LOCAL_UID: ${LOCAL_UID}
        LOCAL_GID: ${LOCAL_GID}
    user: "${LOCAL_UID}:${LOCAL_GID}"
    group_add:
      - render
    environment:
      - DISPLAY=${DISPLAY}
      - WAYLAND_DISPLAY=${WAYLAND_DISPLAY}
      - XDG_RUNTIME_DIR=${XDG_RUNTIME_DIR}
    volumes:
      - /tmp/.X11-unix:/tmp/.X11-unix
      - ${XDG_RUNTIME_DIR}/wayland-0:${XDG_RUNTIME_DIR}/wayland-0
    devices:
      - /dev/dri:/dev/dri
    command: ros2 run turtlesim turtlesim_node

Before we can create a container, we need to build the image. This will take a few minutes. Run the following command and then go and have a well-deserved hot drink of your choice while it runs:

docker compose build

Whether you are using the older X11 compose file or the newer Wayland one, from here on the steps are the same. We are now ready to create a working container which can display graphical applications on the host display.
The docker-compose.yml file contains a single container definition 'sim'. To create the container run the command:

docker compose up -d


This will create a new container, start it up and run the command to launch the turtlesim_node in the turtlesim package. The '-d' option runs the commands in the background, returning the terminal prompt to you (otherwise the terminal window is tied up until the node is terminated).

You should see the TurtleSim window open. If it did not open, check that you have granted docker permission to access the display since you last rebooted the Pi.
The command is: xhost +local:docker

Run the command docker ps to see which containers are running. You should see a new container named 'wayland-jazzy-sim-1'. This time, instead of being randomly generated, the container name is built from the folder name and the service name. As it is the first instance, '-1' is appended.

If you close the TurtleSim window, the container will exit because the node it was running has terminated. If you run 'docker compose up -d' again it will restart the existing container rather than creating a new one each time, unlike the 'docker run' command we used previously.

We can add more services to run additional containers and launch them as a group. We can reuse the image already built for our sim service, so instead of copying the build configuration, we will refer to the existing image. List your docker images to find the name of the project image:
docker image ls

Note that the image name is taken from the project folder name and the service it was built from:
REPOSITORY          TAG       IMAGE ID       CREATED          SIZE
wayland-jazzy-sim   latest    dd5c7d36625b   18 minutes ago   3.84GB

We can see that the docker image is 3.84GB in size. Docker is clever enough to share layers between images, so if we create more images from the same source image, most of that disk space is only used once: the jazzy-ros-base-noble image referenced in the Dockerfile, and the desktop packages added by the first RUN command in that file, are shared across all the images built from them. We will look at docker disk space in more detail at the end of this tutorial. For now, we just needed the name of our image. We can use this to add a second service 'dev-build' to our docker-compose.yml file:


services:
  sim:
    build:
      context: .
      args:
        LOCAL_UID: ${LOCAL_UID}
        LOCAL_GID: ${LOCAL_GID}
    user: "${LOCAL_UID}:${LOCAL_GID}"
    group_add:
      - render
    environment:
      - DISPLAY=${DISPLAY}
      - WAYLAND_DISPLAY=${WAYLAND_DISPLAY}
      - XDG_RUNTIME_DIR=${XDG_RUNTIME_DIR}
    volumes:
      - /tmp/.X11-unix:/tmp/.X11-unix
      - ${XDG_RUNTIME_DIR}/wayland-0:${XDG_RUNTIME_DIR}/wayland-0
    devices:
      - /dev/dri:/dev/dri
    command: ros2 run turtlesim turtlesim_node

  dev-build:
    image: wayland-jazzy-sim
    environment:
      - DISPLAY=${DISPLAY}
      - WAYLAND_DISPLAY=${WAYLAND_DISPLAY}
      - XDG_RUNTIME_DIR=${XDG_RUNTIME_DIR}
    volumes:
      - /tmp/.X11-unix:/tmp/.X11-unix
      - ${XDG_RUNTIME_DIR}/wayland-0:${XDG_RUNTIME_DIR}/wayland-0
    devices:
      - /dev/dri:/dev/dri
    command: rqt_graph


This second service uses the existing image and runs the 'rqt_graph' tool. Now when you run 'docker compose up -d' you should see two windows appear, and two containers listed when you run 'docker ps'. If you close one of the tool windows (stopping its container) and run 'docker compose up -d' again, it will detect that one container is already running and just launch the stopped one. As these containers were launched from the same docker-compose file, they are automatically on the same docker network. So any nodes run in the dev-build container can control the TurtleSim node as in the ROS tutorials.
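As an aside (not required for this tutorial, and a sketch rather than a tested addition): if you ever want compose to create one service only after another, you can add a depends_on key to the service definition, using the service names from our file:

```yaml
  dev-build:
    image: wayland-jazzy-sim
    depends_on:
      - sim
    # ...rest of the dev-build configuration as shown above
```

Note that depends_on only orders container startup; it does not wait for the ROS node inside the other container to be ready.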


As before, you can open an interactive shell in these containers, using the exec command:

docker exec -it wayland-jazzy-dev-build-1 bash


This allows us to set up the shell to source the ROS packages as before, by running the following command from the shell:

echo "source /opt/ros/jazzy/setup.bash" >> ~/.bashrc


Now exit the shell, and use docker exec to start a new one. You should now be able to run ROS2 commands from the shell. The rqt_graph window can show active ROS nodes and their connections. Click the refresh icon at the top left of this window and it should show the single /turtlesim node.

Run the command: ros2 run turtlesim turtle_teleop_key
Now refresh rqt_graph again. It should show a pair of nodes with connecting arrows for control, status and feedback. You should be able to drive the turtle in the sim window when the terminal running the turtle_teleop_key node has focus.

Interestingly, it was not necessary to source the setup.bash script for the commands run from the docker-compose.yml file. I have not verified this, but it is most likely because the official ROS base images set an entrypoint script (ros_entrypoint.sh) which sources the setup file before running the container's command; shells started with docker exec bypass the entrypoint, which is why we still need to source it there. You can stop the containers by simply closing the windows when you are done playing.
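If you ever hit an image where the environment is not set up automatically for the container command, a hedged workaround is to source the setup script explicitly inside the compose command, for example:

```yaml
    command: bash -c "source /opt/ros/jazzy/setup.bash && ros2 run turtlesim turtlesim_node"
```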

Access to the file system of the Host

Now that we can use Docker Compose to manage our containers, we can take a look at mapping part of the host Pi file system inside our containers. Following the Client Libraries tutorial at: https://docs.ros.org/en/jazzy/Tutorials/Beginner-Client-Libraries/Colcon-Tutorial.html we need to create a folder on our host Pi called 'ros2_ws'. I created this folder in my home directory. Next, map this host folder to the path '/ros2_ws' in the dev-build container by adding it to the volumes section of the docker-compose.yml file as follows:


  dev-build:
    image: wayland-jazzy-sim
    environment:
      - DISPLAY=${DISPLAY}
      - WAYLAND_DISPLAY=${WAYLAND_DISPLAY}
      - XDG_RUNTIME_DIR=${XDG_RUNTIME_DIR}
    volumes:
      - /tmp/.X11-unix:/tmp/.X11-unix
      - ${XDG_RUNTIME_DIR}/wayland-0:${XDG_RUNTIME_DIR}/wayland-0
      - ~/ros2_ws:/ros2_ws
    devices:
      - /dev/dri:/dev/dri
    command: rqt_graph


Stop the container (by closing the rqt tool window) and start it again by running the 'docker compose up -d' command. Now if you open a shell in the dev-build container, you should see a folder /ros2_ws which is mapped to the folder ~/ros2_ws on the host. This enables you to run the package building tutorials with the files saved outside your container, so if your container is ever destroyed, the files are not lost when Docker Compose creates a new one. Note that because we have changed our docker-compose.yml file, Docker Compose will detect the change, destroy the old container and recreate it. So once it starts up again, we will need to source our ROS2 packages again (just once, if we put the command into the .bashrc file again).

In the next post, we will look at using a game controller in ROS in a Docker container.

Sunday, 19 October 2025

Running ROS2 on a Raspberry Pi 5

(Updated in October 2025 for the updated Raspberry Pi OS which uses the Wayland display server in place of the older X11 protocol)

ROS2 on the Raspberry Pi requires the 64-bit version of Pi OS Bookworm, and Docker to run ROS. I found the tutorials and documentation assumed a lot of knowledge of both Docker and ROS, so it was a challenge to get started. ROS is not natively supported on Debian (which Raspberry Pi OS is based on). ROS runs on Ubuntu, so you could install that OS on the Pi, but many users want to run Raspberry Pi OS to benefit from the support for hardware and libraries provided for Pi projects. So we need to run ROS inside an Ubuntu docker container. Most guides give you a series of commands which get you through, but leave you feeling like you don't understand much of what is going on, while the documentation offers endless choices and decisions, making it difficult to know which way to turn at each step. This guide is my attempt to lead you through this forest and get you to the other side empowered with some understanding of the path we took and how things work. Links to the source documentation are provided at each stage, and these should be followed alongside this guide. We will start with a vanilla installation of Raspberry Pi OS Bookworm. Make sure you are using the 64-bit version.

Installing Docker

Install docker on Debian bookworm following the instructions at https://docs.docker.com/engine/install/debian/ 


I used the method in the section: Install using the apt repository


# Add Docker's official GPG key:

sudo apt-get update

sudo apt-get install ca-certificates curl

sudo install -m 0755 -d /etc/apt/keyrings

sudo curl -fsSL https://download.docker.com/linux/debian/gpg -o /etc/apt/keyrings/docker.asc

sudo chmod a+r /etc/apt/keyrings/docker.asc


# Add the repository to Apt sources:

echo \

  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/debian \

  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \

  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

sudo apt-get update


sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin


Next follow the post install steps:

https://docs.docker.com/engine/install/linux-postinstall/ 


Create the docker group.

sudo groupadd docker


Add your user to the docker group.

sudo usermod -aG docker $USER


Log out and log back in so that your group membership is re-evaluated.


Now you should be able to run docker commands without needing to use sudo.


Test docker is working:

docker run hello-world


This should show a success message.


Building a ROS Docker Container

Following the documentation at https://docs.ros.org/en/jazzy/How-To-Guides/Installing-on-Raspberry-Pi.html 


I built a docker image myself, in order to get the desktop variant required to run the tutorials. First clone the ROS docker images git repo from your home directory:

git clone https://github.com/osrf/docker_images


The docs then say to navigate to the folder for the release you want and build the container. But which folder to choose? I went with the ‘Jazzy Jalisco’ version of ROS2: as the latest stable supported release, it has the longest support lifetime. You also need to choose a version of Ubuntu to run it on; noble is the only option offered for the Jazzy docker files. We are going to build the desktop variant, as it has all the tools we need for the tutorials. Note that the container image will be around 2.4GB in size, so make sure you have enough free disk space. The ROS tutorials should work with other supported distributions, so you could choose the rolling/jammy option if you want to try the latest development build. But while I found this worked for the ROS CLI tools tutorials, it failed to build all the examples in the Client Libraries tutorial. Also, the Rolling desktop docker image was 3.3GB in size and used Python 3.10, whereas the Jazzy image uses Python 3.12. So I have switched back to Jazzy for now.


Navigate to the folder ~/docker_images/ros/jazzy/ubuntu/noble/desktop

Then run the command:

docker build -t ros_docker .


Note the ‘.’ on the end of the command. It is important, as it tells docker to build using the Dockerfile in the folder you are running the command in.


This creates a docker container image, not a container. It will take a while (a go-and-make-a-coffee amount of time). When it completes, you should be able to see a docker image named ros_docker if you run the command:

docker image ls


Now we can create a docker container from our image:

docker run -it ros_docker


This will create a container from the image and open an interactive shell in the container, so now the command prompt is open in a shell inside the container. 


At this point we have a ROS container. Open a new terminal window on the Pi and type:

docker ps


You should see your container listed as a running container.


We need to configure the environment for ROS. https://docs.ros.org/en/jazzy/Tutorials/Beginner-CLI-Tools/Configuring-ROS2-Environment.html 

We want this configuration to apply to any future shell sessions, so we will put the command into the .bashrc file in our container by entering the command:

echo "source /opt/ros/jazzy/setup.bash" >> ~/.bashrc


Now if we exit and re-enter our container the environment configuration should stick. Type ‘exit’ to leave the container shell. If this was the only connection to the container, then docker will stop the container (check with docker ps). Before we can connect to it again, we need to start it. Docker will have created a random name for the container, so we need to find out what that name is. Use the command: docker ps -a

This will list all containers including those which were stopped. You should see a container which was created from the image ros_docker, and a name which docker assigned to it. We need to start the container again before we can connect to it. Run the command:

docker start <container_name>


Now we can enter a new interactive shell in it by using the command:

docker exec -it <container_name> bash


But substitute <container_name> with the actual name of your container. Once in the new shell, we can test that our environment is configured as expected:

printenv | grep -i ROS


This should list all the ROS environment variables. So you should see something like this:

ROS_VERSION=2
ROS_PYTHON_VERSION=3
PWD=/ros2_ws
AMENT_PREFIX_PATH=/opt/ros/jazzy
CMAKE_PREFIX_PATH=/opt/ros/jazzy/opt/gz_math_vendor:/opt/ros/jazzy/opt/gz_utils_vendor:/opt/ros/jazzy/opt/gz_cmake_vendor
ROS_AUTOMATIC_DISCOVERY_RANGE=SUBNET
PYTHONPATH=/opt/ros/jazzy/lib/python3.12/site-packages
LD_LIBRARY_PATH=/opt/ros/jazzy/opt/rviz_ogre_vendor/lib:/opt/ros/jazzy/lib/aarch64-linux-gnu:/opt/ros/jazzy/opt/gz_math_vendor/lib:/opt/ros/jazzy/opt/gz_utils_vendor/lib:/opt/ros/jazzy/opt/gz_cmake_vendor/lib:/opt/ros/jazzy/lib
PATH=/opt/ros/jazzy/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
ROS_DISTRO=jazzy


If all that is as expected, then we should be able to run a ros2 command:

ros2 pkg executables 


This should list all the ros2 packages which are installed (the desktop version has quite a lot of them).


We now have a working ROS2 install on our Raspberry Pi. But before we get into the tutorials, it will be helpful to know a few more docker commands.


Some Useful Docker Commands

Now we have played around with docker a bit, it is useful to know how to manage the docker artifacts we may have created. For example, we will have created a local copy of the hello-world image, and a hello-world container for each time we ran the docker run hello-world command.


List your docker images:

docker image ls


List your docker containers:

docker ps -a


(The -a flag includes stopped containers. Without it, only running containers are shown).


To stop and start containers, we can use their container names with these commands:

docker start <container_name>

docker stop <container_name>


To clean up docker containers, note their container ID shown with the docker ps -a command and delete them using their ID:

docker rm <container ID>


Each time we use the docker run command with an image name, we create a new container. We don’t want this, as our environment will not be configured in the new container. So once we have a container, use start and stop instead. (This goes against the philosophy of Docker, where containers are created and destroyed so you always run from a clean state. But we are just running some tutorials here, so we will treat our container as persistent. We’ll go deeper into docker and the Docker Compose tools in another tutorial.)


We have run some shell commands, but our container will not have a graphical context so the GUI tools will not run. Try running the rqt command in your container and you will get an error about not being able to access the display. We need to give our container the hooks to access our display.


At the time I wrote this post the method in the following article worked: https://wiki.ros.org/docker/Tutorials/GUI 

But since then Raspberry Pi OS has switched from X11 to Wayland, and this no longer works. I have updated this post with the options I found to make it work on Wayland on the current (19th Oct. 2025) OS image. You have to provide the parameters when you create the container, so we will have to start again with a fresh container.


For older X11 based systems:


docker run -it \
    --env="DISPLAY" \
    --env="QT_X11_NO_MITSHM=1" \
    --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \
    ros_docker

For the Wayland based Pi OS:

docker run -it \
    --volume=/tmp/.X11-unix:/tmp/.X11-unix \
    --volume=$XDG_RUNTIME_DIR/wayland-0:/run/user/$(id -u)/wayland-0 \
    --env=XDG_RUNTIME_DIR=/run/user/$(id -u) \
    --env=WAYLAND_DISPLAY=$WAYLAND_DISPLAY \
    --env=DISPLAY=$DISPLAY \
    --device=/dev/dri:/dev/dri \
    ros_docker

Now we have a container which can access the graphics display of the host Pi. We need to configure the environment again in our new container:

source /opt/ros/jazzy/setup.bash


Or add the command to .bashrc in the container to make it persistent:

echo "source /opt/ros/jazzy/setup.bash" >> ~/.bashrc


We also need to grant permissions to allow the display to be accessed. In a terminal in the host Pi, run the command:

xhost +local:docker


Note this is not secure, but it only grants display access to docker processes and only lasts until the next reboot, so I was happy with this on a private network. The linked article goes into more detail, and gives some other options. But this was the simplest one to get working on a Raspberry Pi.
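If re-running this after every reboot gets tedious, one option (a sketch; it assumes you open at least one terminal on the Pi desktop before starting the containers, since .bashrc runs for interactive shells) is to append the command to your .bashrc:

```shell
# Append the xhost grant to ~/.bashrc, guarding against duplicate entries.
LINE='xhost +local:docker > /dev/null 2>&1'
grep -qxF "$LINE" ~/.bashrc 2>/dev/null || echo "$LINE" >> ~/.bashrc
```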


Now, in the shell in the container, you should be able to run the command:

rqt


The GUI tool window should open successfully on your host Pi desktop. Now you have a container which can run graphical tools. If you get an authorisation error, check you have run the xhost +local:docker command again since you last booted up the Pi.


Remember we just created a new container, so run docker ps -a to find out what the new container name is, and use this one for the graphical tools in the tutorials. You can rm the older container as we no longer need it.


Recap

First we built a docker container image for a specific version of Ubuntu and a specific ROS2 distribution. We used the desktop variant as it contains all the packages needed for graphical demos and the tutorials.

Next we created a docker container using this image to run ROS2. We created it with a configuration so it can run graphical tools on the host desktop. We set up the .bashrc file to configure each new shell we start, so the ROS2 tools can be run. Once we had created this container, we start it with docker start <container_name> and enter a new shell with the command docker exec -it <container_name> bash


We don’t need to use docker run again, unless we want to create a new container with a different configuration.


Running the TurtleSim Tutorial in Docker

The turtlesim tutorial takes you through some basics of ROS using a graphical simulation tool and the ROS GUI tool RQT. The documentation is at:

https://docs.ros.org/en/jazzy/Tutorials/Beginner-CLI-Tools/Introducing-Turtlesim/Introducing-Turtlesim.html

But we can skip the install steps with our preconfigured docker container and jump straight to the run step. First open an interactive shell in your docker container (you need it to be started first). Then in the shell, run the command:

ros2 run turtlesim turtlesim_node


The turtlesim window should appear on your host desktop. 

The shell window will be tied up now, running the simulation node and displaying output from it. So to run another node to control the turtle, we need to open a new terminal window on the host, and start a second interactive shell in the same docker container:

docker exec -it <container_name> bash


Now in this second shell, we can run the turtle_teleop_key node:

ros2 run turtlesim turtle_teleop_key


This should launch the turtle_teleop_key node and you should be able to control the turtle with the arrow keys. The tutorial then suggests some ros2 commands you can run. So you will need a new terminal window to execute another interactive shell in the docker container. You can use this to run the ros2 commands to list nodes, and later run the rqt tool to follow the tutorial steps to execute various api calls to the turtlesim node.


Hopefully this has got you up and running with ROS in Docker on the Raspberry Pi. If there are any steps that didn’t make sense, please let me know so I can fill in any gaps in this guide.


See the next post in this blog for how to extend this using Docker Compose.