(This article is now updated to work with Pi OS on Wayland)
Continuing our exploration of ROS2 running in Docker containers on top of Raspberry Pi OS (see the previous post in this blog). Managing the Docker containers can be simplified further with the Docker Compose tool. If you followed the Docker installation instructions in my previous post then you already have Docker Compose installed, so you can get straight on with using it.
Docker Compose allows us to manage multiple containers with a single configuration file. A Docker network is created automatically so the containers can communicate with each other. The tool works from a YAML file named docker-compose.yml which contains the complete configuration of each container.
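As an illustration of the idea (the service and image names here are placeholders, not part of this tutorial), a compose file with two services might look like this. Both containers join the project's default network, and each can reach the other using its service name as a hostname:

```yaml
services:
  talker:
    image: my_image:latest        # placeholder image name
    command: some_long_running_command
  listener:
    image: my_image:latest
    # This container can reach the other one at the hostname 'talker'
    command: another_command
```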
In the previous post, we created our container with a command line containing several parameters. These can be put into the docker-compose.yml file to simplify the process.
Start with an empty directory for your ROS Docker Compose project, and create a new file named docker-compose.yml in this directory.
If you are using an older Pi OS image using X11, add the following contents to your docker-compose.yml file:
services:
  sim:
    image: ros_docker:latest
    environment:
      DISPLAY:
      QT_X11_NO_MITSHM: 1
    volumes:
      - /tmp/.X11-unix:/tmp/.X11-unix:rw
    command: ros2 run turtlesim turtlesim_node
Unfortunately this no longer works since Pi OS updated to use the Wayland display server. The Wayland configuration requires us to go a little deeper into Docker Compose and user permission configuration.
ROS in Docker under Wayland
I created my project in a folder named 'wayland-jazzy'. Docker Compose uses the folder name to generate docker image and container names, so you can tell which project they belong to. We need to define some environment variables for our Pi user. Rather than hard-coding these, we will fetch them from the actual logged-in user. From a terminal prompt in your project folder, run the following commands:
echo "LOCAL_UID=$(id -u)" > .env
echo "LOCAL_GID=$(id -g)" >> .env
You should now have a file named .env containing the user and group ids from your machine. Check the contents of this:
cat .env
My IDs were both 1000, but yours may be different. We can now pull these into our Docker image so that it can set up the required user and groups. This means we need a project Dockerfile. Start by copying the Dockerfile from the docker_images repo we checked out in the previous tutorial into your project folder. The jazzy desktop on noble image, for example, is in the folder:
~/docker_images/ros/jazzy/ubuntu/noble/desktop
We are going to customise this Dockerfile to add a missing group (group name: render), add the ubuntu user to this group, and specify that we want to run the container as the user ubuntu. But first we need to get the group id for the render group from our host machine. Run the following command:
getent group render
This should display the group information, including an id. Mine returned:
render:x:105:footleg,vnc
This shows that the render group has id 105 on my host, followed by the users in the group. What we need is that id. Open the Dockerfile you copied into your project folder, and add the following lines (substituting your render group id if it was not 105):
RUN groupadd -g 105 render || true \
    && usermod -aG render ubuntu

USER ubuntu
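As an aside, if you ever want to grab that id in a script rather than by eye, it is the third colon-separated field of the getent output. A small sketch, demonstrated here on the example output line shown above (on your own machine, feed it the real `getent group render` output instead):

```shell
# Example getent output line from above; for real use, replace with:
#   line=$(getent group render)
line="render:x:105:footleg,vnc"

# The group id is field 3 of the colon-separated record.
gid=$(echo "$line" | cut -d: -f3)
echo "$gid"   # prints 105
```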
The entire Dockerfile in your project folder should now contain something like this (the jazzy package version may have changed since this was written, so match what you got from your checkout of the docker_images repo):
FROM ros:jazzy-ros-base-noble

# install ros2 packages
RUN apt-get update && apt-get install -y --no-install-recommends \
    ros-jazzy-desktop=0.11.0-1* \
    && rm -rf /var/lib/apt/lists/*

# Declare the build args passed in from docker-compose (via the .env file)
# so the build does not warn about unconsumed build arguments
ARG LOCAL_UID
ARG LOCAL_GID

RUN groupadd -g 105 render || true \
    && usermod -aG render ubuntu

USER ubuntu
For Wayland we are also going to build our Docker image using Compose. So rather than referencing an existing image name (as we did in the older X11 version above), we will add a build section that builds the image from our local Dockerfile. We also need a user entry which reads our local environment variables from the .env file, and we need to add the group 'render'. The rest of the configuration is brought over from the Wayland container creation command line used in the previous tutorial. Here is the complete docker-compose.yml file for Wayland:
services:
  sim:
    build:
      context: .
      args:
        LOCAL_UID: ${LOCAL_UID}
        LOCAL_GID: ${LOCAL_GID}
    user: "${LOCAL_UID}:${LOCAL_GID}"
    group_add:
      - render
    environment:
      - DISPLAY=${DISPLAY}
      - WAYLAND_DISPLAY=${WAYLAND_DISPLAY}
      - XDG_RUNTIME_DIR=${XDG_RUNTIME_DIR}
    volumes:
      - /tmp/.X11-unix:/tmp/.X11-unix
      - ${XDG_RUNTIME_DIR}/wayland-0:${XDG_RUNTIME_DIR}/wayland-0
    devices:
      - /dev/dri:/dev/dri
    command: ros2 run turtlesim turtlesim_node
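Note that Compose substitutes each ${VAR} from your shell environment or the .env file before using the value. If you want to preview what the Wayland socket volume line will expand to, here is a small sketch (XDG_RUNTIME_DIR is normally /run/user/&lt;uid&gt; on Pi OS; the fallback below is just for illustration in case it is unset):

```shell
# Use the real runtime dir if set, otherwise the conventional Pi OS location.
XDG_RUNTIME_DIR=${XDG_RUNTIME_DIR:-/run/user/$(id -u)}

# This is the string Compose will produce for the wayland-0 volume entry.
echo "${XDG_RUNTIME_DIR}/wayland-0:${XDG_RUNTIME_DIR}/wayland-0"
```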
Before we can create a container, we need to build the image. This will take a few minutes, so run the following command and then go and have a well-deserved hot drink of your choice while it runs:
docker compose build
Whether you were using the older X11 compose file or the newer Wayland one, from here on we should be back on track to create a working container which can display graphical applications on the host display.
The docker-compose.yml file contains a single container definition 'sim'. To create the container run the command:
docker compose up -d
This will create a new container, start it up and run the command to launch the turtlesim_node in the turtlesim package. The '-d' option runs the container in the background, returning the terminal prompt to you (otherwise the terminal window is tied up until the node terminates).
You should see the TurtleSim window open. If it did not, check that you have granted local Docker containers access to the display since you last rebooted the Pi (this permission does not persist across reboots).
The command is: xhost +local:docker
Run the command docker ps to see what containers are running. You should see a new container named 'wayland-jazzy-sim-1'. This time, instead of being randomly generated, the container name is built from the folder name and the service name. As it is the first instance of the service, it is suffixed with '-1'.
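The naming pattern can be sketched as follows (the project name defaults to the compose project folder name, so these values are simply the ones from this tutorial):

```shell
project="wayland-jazzy"   # defaults to the compose project folder name
service="sim"             # the service key in docker-compose.yml
index=1                   # instance number, counting from 1

echo "${project}-${service}-${index}"   # prints wayland-jazzy-sim-1
```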
If you close the TurtleSim window, the container will exit, as the node it was running has terminated. If you run 'docker compose up -d' again it will restart the existing container rather than creating a new one each time, unlike the 'docker run' command we used previously.
We can add more services to run additional containers and launch them as a group. We can reuse the image already built for our sim service for the other containers, so instead of copying the build configuration, we will refer to the existing image by name. List your docker images to find the name of the project image:
docker image ls
You can see that the image name is taken from the project folder name and the service it was built from:
REPOSITORY TAG IMAGE ID CREATED SIZE
wayland-jazzy-sim latest dd5c7d36625b 18 minutes ago 3.84GB
We can see that the docker image is 3.84GB in size. Docker is clever enough to share layers between images, so if we create more images from the same base image, most of that disk space is only used once across all of them: the ros:jazzy-ros-base-noble image referenced in the Dockerfile, and the desktop packages added by the first RUN command in that file, are shared. We will look at docker disk space in more detail at the end of this tutorial. For now, we just needed the name of our image. We can use it to add a second service 'dev-build' to our docker-compose.yml file:
services:
  sim:
    build:
      context: .
      args:
        LOCAL_UID: ${LOCAL_UID}
        LOCAL_GID: ${LOCAL_GID}
    user: "${LOCAL_UID}:${LOCAL_GID}"
    group_add:
      - render
    environment:
      - DISPLAY=${DISPLAY}
      - WAYLAND_DISPLAY=${WAYLAND_DISPLAY}
      - XDG_RUNTIME_DIR=${XDG_RUNTIME_DIR}
    volumes:
      - /tmp/.X11-unix:/tmp/.X11-unix
      - ${XDG_RUNTIME_DIR}/wayland-0:${XDG_RUNTIME_DIR}/wayland-0
    devices:
      - /dev/dri:/dev/dri
    command: ros2 run turtlesim turtlesim_node
  dev-build:
    image: wayland-jazzy-sim
    environment:
      - DISPLAY=${DISPLAY}
      - WAYLAND_DISPLAY=${WAYLAND_DISPLAY}
      - XDG_RUNTIME_DIR=${XDG_RUNTIME_DIR}
    volumes:
      - /tmp/.X11-unix:/tmp/.X11-unix
      - ${XDG_RUNTIME_DIR}/wayland-0:${XDG_RUNTIME_DIR}/wayland-0
    devices:
      - /dev/dri:/dev/dri
    command: rqt_graph
This second service uses the existing image and runs the 'rqt_graph' tool. Now when you run 'docker compose up -d' you should see two windows appear, and two containers listed when you run 'docker ps'. If you close one of the tool windows (stopping its container) and run 'docker compose up -d' again, it will detect that one of the containers is already running and just launch the one which is stopped. As these containers were launched from the same docker-compose file, they are automatically on the same Docker network, so any nodes run in the dev-build container can control the TurtleSim node as in the ROS Tutorials.
As before, you can open an interactive shell in these containers, using the exec command:
docker exec -it wayland-jazzy-dev-build-1 bash
This allows us to set up the shell to source the ROS packages as before, by running the following command from the shell:
echo "source /opt/ros/jazzy/setup.bash" >> ~/.bashrc
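One caution with this approach (a side note of mine, not part of the original steps): appending blindly adds a duplicate line every time you run it. A guarded version, sketched here against a scratch file, only appends when the line is not already present:

```shell
line='source /opt/ros/jazzy/setup.bash'

# Demonstrated on a temporary file; use ~/.bashrc inside the container for real.
rc=$(mktemp)

# Append only if the exact line is not already in the file.
grep -qxF "$line" "$rc" || echo "$line" >> "$rc"
grep -qxF "$line" "$rc" || echo "$line" >> "$rc"   # second run changes nothing

grep -cxF "$line" "$rc"   # prints 1
```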
Now exit the shell, and use docker exec to start a new one. You should now be able to run ROS2 commands from the shell. The rqt_graph window can show active ROS nodes and their connections. Click the refresh icon at the top left of this window and it should show the single /turtlesim node.
Run the command: ros2 run turtlesim turtle_teleop_key
Refresh rqt_graph again. It should now show a pair of nodes with connecting arrows for control, status and feedback. You should be able to drive the turtle in the sim window when the host terminal running the turtle_teleop_key node has focus.
Interestingly, it was not necessary to source the setup.bash script for the commands in the docker-compose.yml file to work. The reason is that the base ROS image defines an entrypoint script (/ros_entrypoint.sh) which sources the ROS environment before running the container command, whereas a shell opened with docker exec bypasses that entrypoint, which is why we had to source it manually there. You can stop the containers by simply closing the windows when you are done playing.
Access to the file system of the Host
Now that we can use Docker Compose to manage our containers, we can look at mapping the host Pi file system inside our containers. Following the Client Libraries tutorial at https://docs.ros.org/en/jazzy/Tutorials/Beginner-Client-Libraries/Colcon-Tutorial.html we need to create a folder on our host Pi called 'ros2_ws'. I created this folder in my home directory. Next, map this folder on the host to the path '/ros2_ws' in the dev-build container, by adding it to the volumes section of the docker-compose.yml file as follows:
  dev-build:
    image: wayland-jazzy-sim
    environment:
      - DISPLAY=${DISPLAY}
      - WAYLAND_DISPLAY=${WAYLAND_DISPLAY}
      - XDG_RUNTIME_DIR=${XDG_RUNTIME_DIR}
    volumes:
      - /tmp/.X11-unix:/tmp/.X11-unix
      - ${XDG_RUNTIME_DIR}/wayland-0:${XDG_RUNTIME_DIR}/wayland-0
      - ~/ros2_ws:/ros2_ws
    devices:
      - /dev/dri:/dev/dri
    command: rqt_graph
Stop the container (by closing the rqt tool window) and start it again by running the 'docker compose up -d' command. Note that as we have changed our docker-compose.yml file, Docker Compose will detect the change, destroy the old container, and recreate it. Once it starts up, if you open a shell in the dev-build container you should see a folder /ros2_ws which is mapped to the folder ~/ros2_ws on the host. This enables you to run the package building tutorials with the files saved outside your container, so if your container is ever destroyed, the files are not lost when Docker Compose creates a new one. Because the container was recreated, we will need to source our ROS2 packages again (just once, if we put the command into the .bashrc file again).
In the next post, we will look at using a game controller in ROS in a Docker container.