Saturday, January 10, 2026

System hardening: Migrating from docker compose to podman-compose

TL;DR: Podman-Compose provides essential security features for homelabbers, albeit with a few inconveniences.

Many applications that appeal to homelabbers can be installed with a docker-compose file. Docker-compose files are awesome! They provide a nice uniform way to install wildly different applications.

The most used runtime is Docker. Unfortunately, docker runs as root and container-escape vulnerabilities are abundant. Therefore, a hacked application immediately leads to a fully compromised system. Luckily, there are viable alternatives. For example, podman explicitly allows running containers without root.

This article summarizes what we learned when we migrated our seven homelab services from docker compose to podman-compose on an Ubuntu 24.04 server.

Preparations

Install Podman and Podman-compose

This is by far the simplest section of this article. As root, run:

apt install podman podman-compose

How to prepare non-root user(s)

Since we want to run our services without root access (rootless), we need to create a non-root user. To maximize isolation, we have chosen to create one user per application. However, some applications share a mounted directory, so they run as the same user. For example, we run Syncthing and Apache HTTPD as the same user because Apache serves files from a Syncthing directory.

These are the steps:

  1. Create a non-root user without a shell. This prevents remote logins.

  2. Enable linger. This allows the user to run processes, even when it is not logged in.

  3. By default Podman does not start containers after server boot. Most documentation will tell you to use podman generate systemd to generate a systemd unit file. However, a much simpler approach is to enable the existing podman-restart systemd service.

Below are the steps to perform this process for the user named "immich":

APPUSER=immich
# 1. create user:
sudo useradd -m -s /usr/sbin/nologin $APPUSER
# 2. allow user to run services even when it is not logged in:
loginctl enable-linger $APPUSER
# 3. Enable restart after boot:
sudo -u $APPUSER mkdir -p /home/$APPUSER/.config/systemd/user/
sudo -u $APPUSER cp /lib/systemd/system/podman-restart.service /home/$APPUSER/.config/systemd/user/
systemctl --user --machine ${APPUSER}@ enable podman-restart.service
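The three steps above can also be wrapped in one reusable script. Here is a sketch (the prepare_app_user name and the DRY_RUN switch are our own additions; DRY_RUN=1 only prints the commands, which is handy for a first run, since the real commands need root):

```shell
#!/bin/bash
# Sketch: steps 1-3 for an arbitrary application user.
# With DRY_RUN=1 the commands are printed instead of executed.
set -euo pipefail

run() {
  if [ "${DRY_RUN:-0}" = 1 ]; then echo "$@"; else "$@"; fi
}

prepare_app_user() {
  local user="$1"
  # 1. create the user without a shell (prevents remote logins)
  run sudo useradd -m -s /usr/sbin/nologin "$user"
  # 2. allow the user to run services even when not logged in
  run loginctl enable-linger "$user"
  # 3. enable container restart after boot
  run sudo -u "$user" mkdir -p "/home/$user/.config/systemd/user/"
  run sudo -u "$user" cp /lib/systemd/system/podman-restart.service \
      "/home/$user/.config/systemd/user/"
  run systemctl --user --machine "${user}@" enable podman-restart.service
}

# Usage: DRY_RUN=1 prepare_app_user immich   # inspect first, then run for real
```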

How to run podman as the non-root user

In the previous section we created the users with the nologin pseudo-shell, so it is not possible to log in as such a user. It might be tempting to use su or sudo to switch to the non-root user. However, do not do this! Neither command creates the required 'login session'. If you try anyway, it may appear to work, but you will run into problems later. For more details see sudo rootless podman.

One way to create a shell with a login session is to use machinectl:

APPUSER=immich
machinectl -q shell ${APPUSER}@ /bin/bash

In this shell it is safe to run commands like podman ps.

How to prepare the docker-compose file

Prepare a directory in the user's home directory, and place the docker-compose.yaml file you downloaded from the service's website in it. There are, however, a few things that need to be changed for it to work with rootless podman-compose.

These are the changes we found:

  1. Make image names fully qualified. In particular, you will need to prepend docker.io/ when the container registry is missing from the name. For example, image: wallabag:1.41 becomes image: docker.io/wallabag:1.41. Podman does have a catalog of short names for some images; for example, image: ubuntu works fine.

  2. Configure your application to bind to ports above 1024 (unprivileged ports). Even if the application thinks it is running as root inside the container, and a port mapping to a higher host port is in place, it is still not allowed to bind privileged ports.

    You may also need to adjust the application configurations within the container. See below for some tips.

    Another solution is to change the lowest privileged port on the host system. For example with: sysctl -w net.ipv4.ip_unprivileged_port_start=80. This should be safe as long as you have a firewall in place (which you do, right?!).

  3. Replace restart: unless-stopped with restart: always. The systemd restart service only supports always.

  4. Disable health checks. We have yet to find a better way to reduce the enormous amount of garbage logging that podman produces.
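Changes 1 and 3 are mechanical enough to script across several services. Here is a rough sketch (fix_compose is our own name; the image-name rule is a heuristic that prepends docker.io/ only when there is no registry host, i.e. no dot before the first slash, so review the output by hand):

```shell
#!/bin/bash
# Sketch: mechanically apply changes 1 and 3 to a docker-compose.yaml.
# Prints the rewritten file to stdout; heuristic only, check before use.
fix_compose() {
  sed -E \
    -e '/^[[:space:]]*image:[[:space:]]*[^[:space:]]*\.[^/[:space:]]*\//! s|^([[:space:]]*image:[[:space:]]*)|\1docker.io/|' \
    -e 's|restart:[[:space:]]*unless-stopped|restart: always|' \
    "$1"
}

# Usage: fix_compose docker-compose.yaml > docker-compose.podman.yaml
```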

Start the application as the non-root user

With the above done, we are ready to start the containers with podman-compose. For example, as root, run:

cd /home/wallabag/podman-wallabag
machinectl -q shell wallabag@ podman-compose pull
machinectl -q shell wallabag@ podman-compose up -d

To make things repeatable, you should create a script. Here is the script that we use to run Wallabag:

#!/bin/bash
set -euo pipefail
IFS=$'\n\t'
cd "$(dirname "$0")"

podman-compose pull
podman-compose down
podman-compose --podman-run-args=--log-driver=none up -d
sleep 2
podman-compose exec wallabag /var/www/wallabag/bin/console doctrine:migrations:migrate --env=prod --no-interaction
podman-compose exec db psql --user=postgres -c 'ALTER DATABASE wallabag REFRESH COLLATION VERSION'
podman image prune --force

Root can then run this with:

machinectl -q shell wallabag@ /home/wallabag/podman-wallabag/pull-and-start.sh

User ID mapping and running Apache HTTPD

Apache HTTPD requires more attention than most services. It insists on running as root and then stepping down to another user. While this is a good security measure, it is also annoying because the user which it steps down to in the container (on Ubuntu, this defaults to www-data with user ID 33) gets mapped to a completely unique user ID (something like 100032) on the host. This makes it more difficult to share a mounted directory.

Fortunately, podman has an option to map some users (inside the container) to the user that started the container on the host. For example, when podman-compose is run by user mainsite, the parameter --userns=keep-id:uid=0,uid=33 maps the in-container users 0 and 33 to the mainsite user on the host. (Note that user 0 (root) is already mapped as such by default.)
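The odd host user ID can be explained by podman's default user-namespace mapping. A small illustration (the 100000 start value is an assumption, matching a typical /etc/subuid entry such as "mainsite:100000:65536"):

```shell
# Default rootless mapping: container UID 0 maps to the user's own UID,
# and container UID 1 maps to the first subordinate UID. So container
# UID N (N >= 1) lands on subuid_start + N - 1 on the host.
subuid_start=100000
container_uid=33          # www-data inside the container
host_uid=$((subuid_start + container_uid - 1))
echo "$host_uid"          # prints 100032
```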

Here is the list of changes we had to make while building the apache container:

  • Add USER root somewhere to the end of the Dockerfile (but before ENTRYPOINT or CMD).
  • The file /etc/apache2/ports.conf should contain the text Listen 8080
  • Update the VirtualHost tags in all /etc/apache2/sites-enabled files so they look like this: <VirtualHost *:8080>.
  • Run podman like this: podman-compose --podman-run-args=--userns=keep-id:uid=0,uid=33 up -d

Why this is great!

Although Docker is very easy to use, we are happy that we no longer have to deal with these annoyances:

Solved Docker annoyance #1 — root access

This was the whole point of this exercise! Applications no longer have root access on the host system, which makes full system takeovers after a breach less likely.

Solved Docker annoyance #2 — network misery

Docker's networking setup collides very hard with the firewalls we have used (Shorewall and UFW). You have to jump through hoops to get everything working reliably. These issues are all gone with podman. The slirp4netns network mode simply opens a port without trying to change iptables.

(Hopefully) solved Docker annoyance #3 — poor image cleanup

Even running docker image prune often does not prevent an ever growing /var/lib/docker directory. Hopefully, podman does not have this problem. At least, the images are cached per user and are therefore easier to clean up.

Podman is not perfect

Here are some issues we encountered with podman:

Podman annoyance #1 — logging

For some reason, podman and podman-compose like to log every little detail. The result is that syslog becomes so cluttered with useless data that it becomes unusable. Moreover, this increases wear on our SSDs. We have yet to find a good way to deal with this. For now, we have disabled health checks, and added

[engine]
events_logger = "none"

to each user's .config/containers/containers.conf. You may have also noticed the --podman-run-args=--log-driver=none argument in the start script above.

Perhaps the solution lies in logging directly to syslog (which supports filters) instead of via journald (which doesn't).

Podman-compose annoyance #2 — no incremental changes

Docker compose is smart: it detects and applies only the necessary changes. Podman-compose, however, is not so advanced. Our workaround is to always run a podman-compose down before running a podman-compose up -d.

Removing Docker

Once all services have been migrated, docker can be removed. This is not just a matter of running apt remove docker.io docker-buildx docker-compose-v2 (in case you have Ubuntu's stock docker installed). You have to actively search for remnants. For example with find / -iname '*docker*' 2>/dev/null. In particular you should delete /var/lib/docker (for fun, first do du -h -s /var/lib/docker to see how much disk space Docker needed).

Aside: podman-compose or docker compose on top of podman?

It is possible to run docker compose over podman. This should give you the best of both worlds: the sleek and complete support from docker compose, and the rootless safety of podman.

We did not go this route because:

  • Even though podman-compose is a bit rough, it does what we need.
  • Using tools from the same family feels more future-proof. Good luck when you have interoperability issues!
  • For docker compose to work, you need to set up a Docker-compatible socket. More moving parts mean less reliability.

Conclusions

  • With minor changes you can migrate from docker compose to podman-compose.
  • Podman is not as polished as Docker.
  • Some docker annoyances disappear, but are replaced by podman annoyances.
  • Using docker for internet-facing applications is irresponsible. Using rootless podman fixes that.

Saturday, September 27, 2025

The incomplete guide to sending email notifications from Ubiquiti's UniFi Cloud Gateway (UCG)

Unfortunately, sending notification emails from a UniFi Cloud Gateway (UCG) with remote management disabled is not at all straightforward. Here are some tips, though it will end with a disappointment.

Check list:

  1. Send a test email
  2. Weird stuff for local email servers
  3. Configure an email address for the admin user
  4. Configure an alert
  5. Wait for Ubiquiti to fix it

1. Send a test email

In 'Settings', 'General', setting 'Email Services', select 'Custom Server'. Fill in the details and press 'Send test email'. If it works, skip to section 3, otherwise, read on.

As 'SMTP Server' you need to fill in the fully qualified hostname of the email server. The hostname must match the DNS name in the TLS certificate of the email server. If there is a mismatch, UCG will reject the connection. Therefore, an IP address does not work!

The standardized SMTP submission port is 587, with SSL disabled. No worries, due to STARTTLS the traffic is still encrypted.

If port 587 does not work, your email server may support the legacy port 465, with SSL enabled.

If that also does not work, you may try port 25 with SSL disabled (again, STARTTLS should encrypt the traffic).

2. Weird things for local email servers

Something odd happens when the DNS name of the email server actually resolves to the UCG, and you have port forwarding for ports 25 and 587 to the local device that contains the email server.

The problem is that 'hairpinning' does not fully work on the UCG. Ubiquiti describes hairpinning as follows (source):

When a device on the local network attempts to connect to the public IP address of the UniFi gateway, the traffic is redirected internally, ensuring that port forwarding rules apply as they would for external requests.

Hairpinning is super useful, but Ubiquiti's interpretation is not good enough! To reach the email server, the UCG itself should also be able to use hairpinning. Unfortunately, this is not supported.

Luckily, I learned a workaround from Ubiquiti's support staff. We can give the local email server a 'local DNS record', a kind of DNS override. We set the local DNS record equal to the fully qualified hostname (e.g., mail.example.com) of the email server. After this change, any DNS client in the local network, including the UCG itself, resolves mail.example.com to the IP address of the local device and not to the public IP address of the UCG.

Here is how to set this up: in 'Client Devices', click the device that runs the email server. In the left panel click the cog-icon (settings). Check 'Local DNS Record' and enter the fully qualified hostname of the email server, and click 'Apply Changes'.

We can check that it works by using something like dig from any machine in the local network:

# Before
% dig +short mail.example.com
1.1.1.1         # some public IP address

# After
% dig +short mail.example.com
192.168.1.24    # a local IP address

Try another test email (see section 1) before you continue.

3. Configure an email address for the admin user

Click 'Admin and users' (bottom left icon), click the relevant user. In the left panel click the cog-icon (settings). Enter the email address, and click 'Apply Changes'.

4. Configure an alert

Click 'Alarm Manager' (second icon from bottom left). Select all alarms you want to receive an email for. Then in the left panel make sure 'Email' is selected and click 'Save'. If you create new alarms, you may have to repeat the process.

5. Wait for Ubiquiti to fix it

If you have gotten this far (like I have), it was all for nothing. According to this discussion, you won't get email notifications unless you have enabled Remote Management, even if only for one second.

I have reached out to Ubiquiti support and I will update this article when more information arrives.

Update 2025-09-28: The mentioned discussion thread shows a screenshot in which Ubiquiti states that 'it should work better' in UnifiOS 4.4.x. No release date is known at this moment.

Update 2025-12-03: Meanwhile, our UCG has upgraded to UnifiOS 4.4.9. Unfortunately, still no e-mails are being sent.

Sunday, August 31, 2025

Reorganizing our server shelf

TL;DR: With some planning and tinkering, you can fit a lot of hardware in a small space.

In Dutch homes, the meter cupboard (called a "meterkast" in Dutch) is a small closet, usually placed directly behind the front door. It houses the electric meter, gas meter, water meter and the circuit breakers. In our case it is also the entry point for the internet with an (ADSL) telephone line, and more recently two glass fibers (yes we can choose 🤷).

The top shelf of our meter cupboard is the perfect spot for our family's little 'data center'. The only downside is that it is small; it measures 18cm/7" high, 75cm/30" wide and 30cm/12" deep. The challenge is housing all our electronics there: our home server, an entry-model Synology NAS, the internet modem, an ethernet switch and all the cables and power adapters, including those for our wifi access points.

Since we switched from ADSL to fiber, it was a good moment to reorganize. Our internet provider (Freedom, highly recommended) supports bring-your-own internet modems. The default provided AVM Fritz!Box is quite large. Since our switch and wifi access points are from Ubiquiti anyway, I bought the Unifi Cloud Gateway Ultra. It is very small and at €94, it is cheaper than the Fritz!Box 5590 which costs €180 through Freedom or €225 through a retailer. The Fritz!Box does come with smart home and DECT phone support. We don't need a smart home, but we do want DECT. To fill the gap, we found a secondhand Grandstream DECT/VoIP server with two handsets for just €50 on Marktplaats (our old handsets needed to be replaced anyway).

Although the internet modem, ethernet switch, and DECT/VoIP server are small, they still need to be stacked to save space. For this purpose I designed a small rack. Snijmeesters, a cutting shop in my city, laser-cut it from 3mm birch wood.

Here you see the parts.

Here you see the rack being glued together. To keep the sides straight and make it easy to remove any spilled glue, I used two glass containers.

And here is the result. It is quite sturdy, sturdier than I had imagined.

Using your own internet modem had one unexpected consequence. To convert from fiber to ethernet, you need an ONT. Freedom provides a Huawei EG8242H ONT for this purpose. It turns out that this ONT is a very large box! Making room for it on the shelf would have been difficult. However, since the provided fiber was not long enough anyway, it now hangs lower in the meter cupboard. I tried to find a smaller alternative, but it is hard to find a compatible product at a decent price. In the end we left it like this.

Here is our updated server shelf. The new rack is on the left. We have had it in place for a couple of weeks now without any problems.

Sunday, August 10, 2025

Self-hosted open-source multi-user multi-platform secret management

TL;DR: Syncthing, KeePassXC, Keepass2Android, AuthPass, and WebDAV via Apache HTTP Server allow for self-hosted open-source multi-user multi-platform secret management.

This article describes the secrets management setup used by me and my family. This is not a tutorial, but rather an overview of the possibilities and what works for us.

The setup:

  • is fully open source with Open Source Initiative-approved licenses
  • is multi-platform, it supports macOS, Linux, Windows, iPhone, and Android
  • is multi-user, you can share secrets
  • is self-hosted with low maintenance
  • supports passwords, TOTP, SSH keys and any other secret
  • has browser support
  • does not require tech-savvy family members (one is enough)

The tools

KeePassXC, Keepass2Android Offline and AuthPass

These are three nice and complete apps that all support secret databases in the KeePass format. Although some variations exist, I have never experienced interoperability issues with these tools.

To use KeePassXC in the browser, you need a browser add-on. Many browsers are supported. Keepass2Android and AuthPass integrate well with the Android and iOS environments and don't require additional software.

On Android we're using the offline version of Keepass2Android since we already have Syncthing to distribute the KeePass file.

Bonus features

Bonus feature 1: KeePassXC can also be used as an SSH-agent. This allows you to use SSH-keys as long as the KeePass database is unlocked. The SSH keys are synced along with all the other secrets. No more private key files on disk!

Bonus feature 2: if you ever lost a phone with Google Authenticator installed, you know how painful it is to set up 2FA with TOTP again. Configure TOTP in these apps instead, and that worry is gone.

Syncthing

Syncthing is an amazing program. It just continues to work with very little maintenance. It can synchronize multiple folders between multiple devices. Once a device is connected to another device, they automatically find each other over the internet.

Each person stores their main secrets database in a 'small' folder containing only the files they want to sync to their phone. This small folder is not shared between people. Then there are 'larger' folders that are optionally shared between multiple people. These larger folders are only synchronized between desktops and laptops and are a good place to store shared KeePass databases.

To ensure that all devices with Syncthing always stay in sync, it is a good idea to share all folders with a machine that is always on. Ideally the Syncthing port (22000) would be exposed directly to the internet. This reduces sync conflicts because it is more likely that all devices see the changes from the other devices.

Since you're going to create many folders, think about a naming convention. Our folders start with the name of the owner. The Syncthing folder name can be different from the directory in the file system. For example, the Syncthing folder could be named erik-documents while on my device the directory is called documents.

Even though there is a very nice Android application, Google has made it maddeningly difficult to publish to the Play store. So difficult even, that the maintainers have given up. Fortunately, you can still get a maintained fork via F-Droid or Google Play.

Bonus features and tips

Bonus feature 1: Store all your documents in a Syncthing folder so that you can access them from multiple devices.

Bonus feature 2: Configure a Syncthing instance to keep older file versions. Now you have a backup too!

Bonus feature 3: Sync the camera folder on your Android phone.

Tip 1: Using Homebrew? The most convenient way to install Syncthing is with the command brew install --cask syncthing-app.

Tip 2: When starting a new Syncthing device, remove the default folder shared from directory ~/Sync. Instead, put the folders you're sharing below the ~/Sync directory.

Tip 3: Before you create any folder in Syncthing, change the folder default configuration to have the following ignore pattern. This is especially important when you use Apple devices.

(?d)**/.DS_Store
(?d).DS_Store
#include .syncthing-patterns.txt

Tip 4: All the GUIs of the locally running Syncthing instances have the same 'localhost' URL. Since the URL is the same, you should also use the same password. Otherwise, storing the password in KeePassXC becomes difficult.

Support iPhone with Apache HTTP Server and WebDAV

Due to limitations imposed by iOS (no background threads unless you are paid by Apple), Syncthing does not run on iPhones. Fortunately, we found AuthPass which supports reading from and writing to a WebDAV folder. AuthPass does this really well; if you make changes while being offline, it automatically merges those changes into the latest version of the remote database once you go online again!

Fortunately, we already have a Linux server running Apache HTTP Server that is always on. (The websites there are also synced with Syncthing.) By configuring a WebDAV folder in Apache HTTP Server (protected by a username/password), we can share a Syncthing folder with AuthPass. Each person with an iPhone will need their own WebDAV folder.

Sharing secrets with KeeShare

KeeShare is a KeePassXC feature that allows you to synchronize a shared KeePass database with your main database. Since the main database contains a complete copy of the shared database, you only need to set up KeeShare on one device. Other devices, including your mobile phone, do not require direct access to the shared database.

Since KeeShare is only supported in KeePassXC, you must periodically open KeePassXC. Otherwise, you will miss changes in the shared databases. Shared databases won't sync if you only use the KeePass database on your mobile phone.

Tip: Sharing secrets is limited; you can only share entire databases. Therefore, plan ahead and decide how you want to organize secrets. We settled on a shared database for the whole family, and another shared database for just me and my partner.
Since each shared KeePass database is password-protected, you can store them all in the same shared Syncthing folder. However, if you are sharing other things as well, you may want to create multiple Syncthing folders.

Maintenance

Sometimes two offline devices modify the same KeePass file. Later, Syncthing detects the conflict and stores multiple files, one for each conflicting edit. You can merge all the conflict files using the database merge feature in KeePassXC. After merging them into the main database, you can delete the conflict files. Unfortunately, there is no default way to detect the presence of these conflict files. I manually check the synced folders once every few months (or when I miss a secret!). If you build a detection script, please share!
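As a starting point for such a detection script: Syncthing names conflict copies with a .sync-conflict marker in the file name, so a simple find over the synced folders already does the job. A sketch (the function name and the folder arguments are placeholders):

```shell
# Sketch: list Syncthing conflict copies under the given folders,
# skipping Syncthing's own .stversions backup directories.
find_conflicts() {
  find "$@" -name '*.sync-conflict-*' -not -path '*/.stversions/*'
}

# Usage: find_conflicts ~/Sync
```

Run it from a cron job, for example, and mail yourself the output whenever it is non-empty.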

Since Syncthing runs so quietly in the background, you won't notice when things go wrong. To prevent this, check the Syncthing UI every few months.

Not explored options

KeePassXC has a command-line interface. This could be useful for Linux servers or scripts.

Conclusion

We have used this setup for over four years and have found it to be user-friendly and low-maintenance. Even my teenage kids are fervent users. Despite losing several devices, our secrets have never been lost.

Updates

Update 2025-10-12: updated the links to the syncthing android app on F-Droid and play store.

Update 2025-10-12: recommend Keepass2Android Offline instead of Keepass2Android.

Tuesday, July 8, 2025

Shutting down Version 99 does not exist

July 2007, I was so fed up with Maven's inability to exclude commons-logging that I wrote a virtual Maven repository to fake it. A few months later this became Version 99 does not exist. The virtual repository has been running on my home server ever since.

In the early years, minor changes were made to increase compatibility with more tools.

Unfortunately, in 2011, the virtual repository lost its hostname.

Some time later, I reinstated a proper DNS name for the service: version99.grons.nl. For some unknown reason, I never blogged about this!

In 2013 the original version (90 lines of Ruby with Camping) was replaced by a Go implementation written by Frank Schröder. This version (with super minor changes) has been in place ever since.

In the meantime, commons-logging has been replaced by SLF4J and tools have become better at excluding dependencies. Therefore, after almost 18 years, I am shutting down Version 99 does not exist. Version 99, it was fun having you, but luckily we no longer need you.

Saturday, June 21, 2025

Installing a theme for Launch Drupal CMS

Drupal CMS trial for Desktop is a wonderful way to try Drupal CMS. Unfortunately, you can't install new themes from the admin UI in the browser. Once you have selected a theme, for example Corporate Clean, there is an instruction on using a tool called composer. It's funny how there are so many pockets of developers where it is just assumed you have some particular tool installed.

As I found out, composer is a package manager for PHP, and it is installed inside the ~/Documents/drupal directory. This is the directory that the launcher creates on the Mac; the location may differ per OS. We also need a PHP runtime; I found one in the Launcher application bundle.
Bringing this all together, these are the commands to install a theme from the command line:

cd ~/Documents/drupal
/Applications/Launch\ Drupal\ CMS.app/Contents/Resources/bin/php \
    ./vendor/bin/composer require 'drupal/corporateclean:^1.0'

Sunday, June 8, 2025

Running Wallabag with PostgreSQL and Docker compose

Now that Pocket is going away, it is time to host a read-it-later app myself. After looking at a few options, my eyes fell on Wallabag. It's not all that smooth, but it works reasonably well.

I run several services with docker compose for its ease of upgrading, and, more importantly, for the ease with which you can get rid of a service once you no longer need it.

Since it didn't work out of the box, here is how I installed Wallabag with PostgreSQL, using Docker compose.

Installation

Create the directory /srv/wallabag. This is where wallabag will store the article images and its database.

Prepare the docker-compose.yaml file with:

services:
  wallabag:
    image: wallabag/wallabag
    restart: unless-stopped
    environment:
      - POSTGRES_PASSWORD=***_random_string_1_***
      - POSTGRES_USER=postgres
      - SYMFONY__ENV__DATABASE_DRIVER=pdo_pgsql
      - SYMFONY__ENV__DATABASE_HOST=db
      - SYMFONY__ENV__DATABASE_PORT=5432
      - SYMFONY__ENV__DATABASE_NAME=wallabag
      - SYMFONY__ENV__DATABASE_USER=wallabag
      - SYMFONY__ENV__DATABASE_PASSWORD=***_random_string_2_***
      - SYMFONY__ENV__DOMAIN_NAME=https://wallabag.domain.com
      - SYMFONY__ENV__SERVER_NAME="Wallabag"
      - SYMFONY__ENV__LOCALE=en
    ports:
      - "8000:80"
    volumes:
      - /srv/wallabag/images:/var/www/wallabag/web/assets/images
    healthcheck:
      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost/api/info"]
      interval: 1m
      timeout: 3s
    depends_on:
      - db
      - redis
  db:
    image: postgres:17
    restart: unless-stopped
    environment:
      - POSTGRES_PASSWORD=***_random_string_1_***
      - POSTGRES_USER=postgres
    volumes:
      - /srv/wallabag/data:/var/lib/postgresql/data
    healthcheck:
      test:
        - CMD-SHELL
        - 'pg_isready -U postgres'
      interval: 5s
      timeout: 5s
      retries: 5
  redis:
    image: redis:alpine
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 20s
      timeout: 3s

Replace the two secrets, change your DNS domain, and add more env variables as desired (see wallabag on docker hub for more information). Make sure you read the entire file.

Wallabag's auto-initialization code doesn't support PostgreSQL all that well. However, with the following commands you should get it to work:

docker compose pull
docker compose up -d
docker compose exec db psql --user=postgres \
    -c "GRANT ALL ON SCHEMA public TO wallabag; \
        ALTER DATABASE wallabag OWNER TO wallabag;"
sleep 30
docker compose exec --no-TTY wallabag \
    /var/www/wallabag/bin/console doctrine:migrations:migrate \
    --env=prod --no-interaction
docker compose restart

What did we get?

You should now have a running Wallabag on port 8000. Go configure Caddy, Nginx, or whatever as a proxy with HTTPS termination.
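For example, a minimal Caddyfile sketch (assuming the wallabag.domain.com name from the compose file points at this host; Caddy then obtains a TLS certificate automatically):

```
wallabag.domain.com {
    reverse_proxy localhost:8000
}
```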

Create a user

What you don't have yet, is a way to login. For this you need to create a user. You can do this with the following command:

docker compose exec -ti wallabag \
    /var/www/wallabag/bin/console fos:user:create --env=prod

More commands are documented on Wallabag console commands. Do not forget the mandatory --env=prod argument.

start.sh

To make upgrades a bit easier, you can use the following script:

#!/bin/bash
set -euo pipefail
IFS=$'\n\t'

docker compose pull
docker compose up -d
sleep 2
docker compose exec --no-TTY wallabag /var/www/wallabag/bin/console doctrine:migrations:migrate --env=prod --no-interaction
docker image prune

Future stuff

Once I have figured out how, I will update this article with:

  • Backup
  • Fail2ban integration