# Self Hosting 2024/2025
This post is a point-in-time collation of notes about my self hosting setup prior to an anticipated rebuild. The plan is to move from a nice, stable Podman-based setup to a pointlessly over-engineered Kubernetes one. It’ll catalogue some of the nuances of working with Podman instead of Docker, some quality of life improvements, what services I’m using and what didn’t make the cut. There will hopefully be a follow up post about the over-engineered Kubernetes setup, probably deep into 2026 when it’s “done”.
## Specs

| Spec | Value |
|---|---|
| Make/Model | Lenovo ThinkCentre M720q |
| Processor | Intel i5-8600T (8th gen) |
| Storage | 256 GB SSD |
| Memory | 16 GB |
| OS | Debian 12.5 Bookworm |
## Architecture Diagram
The diagram is about right - it was cobbled together with Claude Code, though on inspecting the file it would have been easier to do manually!
## OS
Debian seemed like a good choice for the server, using the netinst image to keep the footprint small. As the server sits under the family TV, the install was a bit awkward: I commandeered the TV to do the setup, and it wasn’t the most enjoyable experience. To save pain, future attempts will likely use a standard install, or simply Ubuntu Server.
unattended-upgrades was configured to automatically install and update security patches.
Knowing myself and my inclination not to check whether things are running, I wrote a script to send mobile notifications when updates are installed or the power is cycled, so I can be confident the system is up to date. The notifications are sent using https://ntfy.sh/ - it’s a solid service and I’d definitely recommend it as a simple notifier.
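For reference, a notification is just an HTTP POST to a topic, so it’s easy to test from the shell before wiring it into anything (`abc123` is a placeholder topic, matching the one used in the scripts below):

```bash
# Send a test notification to an ntfy.sh topic (abc123 is a placeholder)
curl https://ntfy.sh/abc123 \
  -H "Title: Test notification" \
  -H "Tags: tada" \
  -d "Hello from the home server"
```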
### Implementation Details
Install unattended-upgrades and modify a few config files…
Ensure security updates: `/etc/apt/apt.conf.d/50unattended-upgrades`

```
Unattended-Upgrade::Allowed-Origins {
    "${distro_id}:${distro_codename}-security";
};
```
Enable automatic updates: `/etc/apt/apt.conf.d/20auto-upgrades`

```
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Download-Upgradeable-Packages "1";
APT::Periodic::AutocleanInterval "7";
APT::Periodic::Unattended-Upgrade "1";
```
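To check the configuration is actually being picked up, a dry run is handy:

```bash
# Simulate an unattended-upgrades run without installing anything
sudo unattended-upgrade --dry-run --debug

# Confirm the periodic settings are in effect
apt-config dump | grep APT::Periodic
```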
The notification details are all pulled straight from my Logseq knowledge base, so please excuse the additional notes and formatting!
#### Post-upgrade notifier

Executable script for notification: `/usr/local/sbin/post-upgrade-script.py`

```python
#!/usr/bin/env python3
from requests import post
import datetime

current_timestamp = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")

post("https://ntfy.sh/abc123",
     data=f"Server updated at {current_timestamp}",
     headers={
         "Title": "Unattended upgrade completed",
         "Tags": "tada",
     })
```

Triggered by `apt`, with the hook configured at `/etc/apt/apt.conf.d/90post-upgrade-hooks`:

```
DPkg::Post-Invoke { "/usr/bin/env python3 /usr/local/sbin/post-upgrade-script.py"; };
```

`DPkg::Post-Invoke` will only successfully trigger when a `dpkg` or `apt` action occurs, e.g. `apt install --reinstall vim`.
#### Shutdown notifier

Config at `shutdown_notifier.service`.

Executable script for notification: `/usr/local/sbin/shutdown-script.py`

```python
#!/usr/bin/env python3
from requests import post
import datetime

current_timestamp = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")

post("https://ntfy.sh/abc123",
     data=f"Server restarted at {current_timestamp}",
     headers={
         "Title": "System restart",
         "Tags": "recycle",
     })
```

The `systemd` unit:

```ini
[Unit]
Description=Notify at Shutdown
DefaultDependencies=no
Requires=network-online.target

[Service]
Type=oneshot
ExecStop=/usr/bin/python3 /usr/local/sbin/shutdown-script.py
RemainAfterExit=true
StandardOutput=journal

[Install]
WantedBy=multi-user.target
```

The trick was to not have an `ExecStart`:

- The script doesn’t need to run on `poweroff.target` or similar; by the time those are running, `network-online.target` has long since finished.
- The script needs to end when `multi-user.target` finishes - thus `ExecStop`.
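One gotcha with this approach: the unit has to be active for its `ExecStop` to fire on the way down, so it needs enabling and starting once. Assuming it’s installed at `/etc/systemd/system/shutdown_notifier.service`, that’s roughly:

```bash
# Reload unit files, then enable and start the notifier so that
# its ExecStop runs at shutdown/reboot
sudo systemctl daemon-reload
sudo systemctl enable --now shutdown_notifier.service
```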
## Podman
Podman is running in rootless mode to prevent privilege escalation attacks. This requires all services to run on non-privileged ports (1024 and above), plus some iptables routing jiggery-pokery so that services appear externally on the expected ports (e.g. HTTPS on 443).
I’ve installed podman-compose, which is pretty much equivalent to docker-compose. There are a couple of accommodations you need to make when converting files from the latter, but it’s pretty straightforward.
Containers are managed using `systemd` services.
### Implementation Details

#### Port forwarding with iptables
As rootless Podman can’t use privileged ports, we need to forward traffic from expected ports (e.g. HTTP/S, DNS) to the relevant container ports (e.g. nginx, Pi-hole):
```bash
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080           # HTTP
sudo iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-port 8443          # HTTPS
sudo iptables -t nat -A PREROUTING -i eno1 -p udp --dport 53 -j REDIRECT --to-ports 1053  # DNS (UDP)
sudo iptables -t nat -A PREROUTING -i eno1 -p tcp --dport 53 -j REDIRECT --to-ports 1053  # DNS (TCP)
```
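These rules don’t survive a reboot on their own. On Debian, one option (among several) is iptables-persistent, which restores saved rules at boot:

```bash
# Persist the current iptables rules across reboots
sudo apt install iptables-persistent
sudo netfilter-persistent save   # writes /etc/iptables/rules.v4 and rules.v6
```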
#### docker.sock

In some `docker-compose.yml` files, you see:

```yaml
volumes:
  - /var/run/docker.sock:/var/run/docker.sock
```
In a `podman-compose.yml` on rootless Podman, this becomes:

```yaml
volumes:
  - /run/user/$(id -u)/podman/podman.sock:/var/run/docker.sock:Z
```
The :Z suffix tells Podman to relabel the volume for SELinux (as private, unshared).
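For that socket path to exist at all, the rootless Podman API socket needs to be running for your user - if your distribution ships the user unit, something like:

```bash
# Enable the rootless Podman API socket for the current user
systemctl --user enable --now podman.socket

# Confirm it's listening where the compose file expects it
ls -l /run/user/$(id -u)/podman/podman.sock
```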
#### Podman user: 100999
When a directory inside a container is owned by a non-root/non-system user (i.e. UID > 999) and that directory is exposed as a mount to the host, on the host it appears to be owned by the 100999 user/group. This can cause some issues when trying to work with files in the mounted directory as your own user.
Create a podman user/group:

```bash
sudo groupadd -g 100999 podman
sudo useradd -u 100999 -g 100999 -m -s /usr/bin/bash podman
```
Add your own user to the podman group:
```bash
sudo usermod -aG podman $USER
```
This’ll give you access to the mounted directory, though this depends on the group permissions of its contents.
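If you want to see exactly where the 100999 figure comes from, the user namespace mapping can be inspected from the host:

```bash
# Show how container UIDs map onto host UIDs for rootless Podman
podman unshare cat /proc/self/uid_map

# The subordinate UID range assigned to your user (typically starts at 100000)
grep "$USER" /etc/subuid
```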
#### systemd container management
To ensure containers restart after a power cycle, each hosted service has an associated systemd service in ~/.config/systemd/user:
```ini
[Unit]
Description=Podman Compose for karakeep
Requires=podman.service
After=podman.service

[Service]
Type=simple
WorkingDirectory=/home/myuser/karakeep
ExecStart=/home/myuser/.local/bin/podman-compose up
ExecStartPost=/home/myuser/karakeep/mdns.sh
ExecStop=/home/myuser/.local/bin/podman-compose down
ExecStopPost=/usr/bin/pkill -9 -f "avahi-publish -a -R karakeep.local"
Restart=on-failure

[Install]
WantedBy=default.target
```
The `ExecStartPost` and `ExecStopPost` lines publish the service with avahi so it can be resolved on the `.local` domain. There is probably a nicer way to do this, but I couldn’t get the avahi-publish directive to work entirely within the systemd service, so I wrapped it in a bash file and run that - `mdns.sh`:
```bash
#!/bin/bash
/usr/bin/avahi-publish -a -R karakeep.local 192.168.0.188 &
```
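With the unit saved as (say) `~/.config/systemd/user/karakeep.service`, it needs enabling per user, and lingering has to be switched on so user services start at boot rather than only at login:

```bash
# Pick up the new unit and enable it for this user
systemctl --user daemon-reload
systemctl --user enable --now karakeep.service

# Let this user's services run without an active login session
sudo loginctl enable-linger "$USER"
```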
## Container: nginx

nginx is set up as a reverse proxy. It routes requests through to the relevant service based on the hostname used to access it (e.g. openwebui.local). It also terminates TLS, using certificates issued by a self-signed certificate authority.
### Implementation Details

#### nginx-config.conf
We have a catch-all redirect in our http block:
```nginx
# catch all server block redirecting http to https counterpart
server {
    listen 8080;
    server_name _;
    return 301 https://$host$request_uri;
}
```
A typical service looks like this:
```nginx
server {
    listen 8443 ssl;
    server_name karakeep.local karakeep.home;

    ssl_certificate /etc/nginx/ssl/karakeep.crt;
    ssl_certificate_key /etc/nginx/ssl/karakeep.key;

    location / {
        proxy_pass http://192.168.0.188:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```
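A quick end-to-end check from another machine on the LAN, assuming the CA certificate from the next section has been copied over as `ca.crt`:

```bash
# Hit the service via the reverse proxy, trusting the self-signed CA
curl --cacert ca.crt -I https://karakeep.local/
```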
#### Creating TLS certs
The server isn’t accessible from the Greater Internet (i.e. it’s only on the local network), so I am using self-signed certificates instead of Let’s Encrypt. The certificate authority cert/key are `ca.crt` and `ca.key`. More information on doing this can be found here.
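For completeness, creating the CA itself is a couple of openssl commands along these lines (the subject details here are placeholders):

```bash
# Generate the CA key and a long-lived, self-signed CA certificate
openssl genrsa -out ca.key 4096
openssl req -x509 -new -nodes -key ca.key -sha256 -days 3650 \
  -subj "/C=GB/O=Random Tasks/CN=Home Lab CA" \
  -out ca.crt
```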
To make generating certificates for each service easy, here’s a crude script that works well if you use it properly. The two inputs are the service name and an organisational unit for the certificate subject:
```bash
export AUTOGEN_SVC=my_new_service
export AUTOGEN_ORGUNIT="Some Org Unit"
```
The script:
```bash
#!/bin/bash
AUTOGEN_SVC=$1
AUTOGEN_ORGUNIT=$2

NGINX_PROXY_TLS_PATH=/home/serveruser/nginx_proxy/ssl
ARCHIVE_DIR=/home/serveruser/tls/archive

# Check if both parameters are provided
if [ -z "$AUTOGEN_SVC" ] || [ -z "$AUTOGEN_ORGUNIT" ]; then
    echo "Usage: $0 <service_name> <ssl_organisational_unit>"
    exit 1
fi

# Ensure paths exist
if [ ! -d "$NGINX_PROXY_TLS_PATH" ] || [ ! -d "$ARCHIVE_DIR" ]; then
    echo "NGINX proxy path or archive directory does not exist."
    exit 1
fi

# copy template and update
cp template.cnf "$AUTOGEN_SVC.cnf"
sed -i "s/changeme/$AUTOGEN_SVC/g" "$AUTOGEN_SVC.cnf"

# generate cert stuff
if ! openssl genrsa -out "$AUTOGEN_SVC.key" 2048; then
    echo "Failed to generate private key."
    exit 1
fi

if ! openssl req -new -key "$AUTOGEN_SVC.key" -out "$AUTOGEN_SVC.csr" -config "$AUTOGEN_SVC.cnf" -subj "/C=GB/ST=MyCounty/L=MyTown/O=Random Tasks/OU=$AUTOGEN_ORGUNIT/CN=$AUTOGEN_SVC.local"; then
    echo "Failed to generate CSR."
    exit 1
fi

if ! openssl x509 -req -in "$AUTOGEN_SVC.csr" -CA ca.crt -CAkey ca.key -CAcreateserial -out "$AUTOGEN_SVC.crt" -days 365 -extensions v3_req -extfile "$AUTOGEN_SVC.cnf"; then
    echo "Failed to generate certificate."
    exit 1
fi

# copy files to nginx folder
cp "$AUTOGEN_SVC.crt" "$AUTOGEN_SVC.key" "$NGINX_PROXY_TLS_PATH"
chmod 0444 "$NGINX_PROXY_TLS_PATH/$AUTOGEN_SVC.crt"
chmod 0400 "$NGINX_PROXY_TLS_PATH/$AUTOGEN_SVC.key"

# archive files
mv "$AUTOGEN_SVC."* "$ARCHIVE_DIR"

echo "SSL files created and archived successfully."
```
The script expects a `template.cnf`:
```ini
[ req ]
distinguished_name = req_distinguished_name
req_extensions = v3_req

[ req_distinguished_name ]
countryName = Country
stateOrProvinceName = State
localityName = Locality
organizationName = Organization
organizationalUnitName = Organizational Unit
commonName = Common Name (e.g., server FQDN or YOUR name)
emailAddress = Email Address

[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names

[ alt_names ]
DNS.1 = changeme.home
DNS.2 = changeme.local
```
It’s a little convoluted, but it works.
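For the record, the script reads the service name and organisational unit as positional arguments (hence the usage message), so a typical run looks something like this - `gen-cert.sh` is just a placeholder for whatever you’ve saved it as:

```bash
# Generate, install and archive a certificate for a new service
./gen-cert.sh "$AUTOGEN_SVC" "$AUTOGEN_ORGUNIT"
```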
## Other Containers
| Tool | Description | Notes |
|---|---|---|
| Open WebUI | AI interface, mainly for LLM-provider APIs and local LLMs | Need to set client_max_body_size 512M; in nginx config’s http block to allow larger file uploads for RAG. |
| Karakeep | Link/bookmark manager with AI tagging; very slick | |
| SearXNG | Metasearch engine that aggregates multiple search engines | Leveraged by Open WebUI’s web search feature. |
| Pi-hole / Unbound | DNS server/ad blocker, set up with recursive DNS | More in the DNS section below |
## Backups
Encrypted backups are managed by Kopia, pointing at cloud-based object storage. To ensure clean snapshots, containers are halted prior to backup, then restarted on completion. There’s a notification on backup - successful or not. At some point, I’ll stop the success notification, but it’s nice to be reassured!
### Implementation Details

#### backup_and_notify.py
This is a crude, LLM-generated script that shuts down the containers, runs Kopia, then restarts the containers. It also sends a notification to ntfy.sh.
```python
#!/usr/bin/python3
import subprocess
import os
import sys
from requests import post
from datetime import datetime

# Environment variables setup
os.environ['XDG_RUNTIME_DIR'] = '/run/user/1000'
os.environ['DBUS_SESSION_BUS_ADDRESS'] = 'unix:path=/run/user/1000/bus'

# Constants
CURRENT_TIMESTAMP = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
SERVICES = [
    'openwebui',
    'searxng',
    'karakeep',
    'nginx_proxy',
]
NTFY_URL = "https://ntfy.sh/abc123"
USER = 'myuser'

def notify(message, header, emoji):
    """Send a notification with the specified message, header, and emoji."""
    post(NTFY_URL, data=f"Server {message} at {CURRENT_TIMESTAMP}",
         headers={
             "Title": header,
             "Tags": emoji,
         })

def run_command(command):
    """Run a shell command and handle errors."""
    process = subprocess.run(command, shell=True, text=True, capture_output=True)
    if process.returncode != 0:
        sys.exit(f"Error running command: {command}\n{process.stderr}")
    print(process.stdout)

def pause_services(services):
    """Stop the specified services."""
    for service in services:
        print(f"Stopping {service}...")
        run_command(f"sudo -E -u {USER} systemctl --user stop {service}")

def unpause_services(services):
    """Start the specified services in reverse order."""
    for service in reversed(services):
        print(f"Starting {service}...")
        run_command(f"sudo -E -u {USER} systemctl --user start {service}")

def run_kopia_snapshot(directory):
    """Create a kopia snapshot of the specified directory."""
    command = f"kopia snapshot create --config-file=/home/myuser/.config/kopia/repository.config --log-dir=/home/myuser/.cache/kopia {directory}"
    try:
        subprocess.run(command, shell=True, text=True, capture_output=True, check=True)
        notify("backed up", "Backup success", "tada")
        print(f"Snapshot of {directory} completed successfully.")
    except subprocess.CalledProcessError as e:
        notify("backup failure", "Backup failed", "hankey")
        print(f"Error occurred while snapshotting {directory}: {e}")

def main():
    directory_to_snapshot = "/home/myuser"
    try:
        pause_services(SERVICES)
        run_kopia_snapshot(directory_to_snapshot)
    finally:
        unpause_services(SERVICES)

if __name__ == "__main__":
    main()
```
#### Scheduling with systemd
Needs to be set up as root:
`/etc/systemd/system/auto-backup.service`

```ini
[Unit]
Description=Automatic backup with Kopia

[Service]
ExecStart=/home/myuser/backup_and_notify.py
```
`/etc/systemd/system/auto-backup.timer`

```ini
[Unit]
Description=Run backup at 3am

[Timer]
OnCalendar=03:00
Persistent=true

[Install]
WantedBy=timers.target
```
#### Enable and verify

```bash
systemctl enable auto-backup.timer
systemctl start auto-backup.timer
systemctl list-timers
```
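Once the timer has fired at least once, it’s worth checking that both the unit and the repository look healthy:

```bash
# Check the most recent backup run and the snapshots it produced
journalctl -u auto-backup.service --since today
kopia snapshot list --config-file=/home/myuser/.config/kopia/repository.config
```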
## Accessing Remotely
NordVPN Meshnet enables encrypted, point-to-point communication between devices across the internet as if they were part of the same LAN. Tailscale is a similar product, but as I’m already a NordVPN customer, Meshnet was the obvious choice.
### Implementation Details
This requires:
- a NordVPN account
- nordvpn command-line tool installed on the server
- Pi-hole for DNS
- `nginx` reverse proxy
My main use case is accessing the server from my Android phone when away from home. For this to happen, the home server and the phone need to be set up on the same meshnet.
On the mobile app, VPN connectivity is required to use Meshnet. A custom DNS server needs to be configured - this should use the meshnet IP of the home server. Traffic on port 53 (i.e. DNS) will be routed to the Pi-hole container on port 3553, as mentioned in the Podman minutiae above.
The local network access and remote access Meshnet permissions must be enabled for the mobile device from your server.
In the Pi-hole DNS settings, Permit all origins needs to be selected. Domains for services to be served over meshnet should then be added to the Local DNS section, mapping the domain to the meshnet IP of the server on the port used by the service - e.g.:
| Setting | Value |
|---|---|
| Domain | karakeep.home |
| IP | 100.123.232.10 |
| Port | 3000 |
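From a device on the Meshnet, resolution can be sanity-checked by querying the server’s Meshnet IP directly (assuming `dig` is available on the client):

```bash
# Ask the Pi-hole, via the server's Meshnet address, for a local domain
dig @100.123.232.10 karakeep.home +short
```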
The nginx reverse proxy handles TLS for the `.home` and `.local` domains, as can be seen in `nginx-config.conf`.
## DNS
avahi is used to broadcast the server and its services on the `.local` namespace, while Pi-hole’s DNS is used for advertising the services across NordVPN’s Meshnet. Each service has an avahi host published on the local network, pointing at the server’s host IP (routing to the specific service is handled by the nginx reverse proxy). Pi-hole is paired with Unbound as its recursive DNS resolver, allowing queries to go directly to authoritative nameservers and bypassing my ISP’s resolvers.
### Implementation Details
Pi-hole and Unbound are configured to run in a single podman-compose.yml:
```yaml
# Adapted from https://github.com/patrickfav/pihole-unbound-docker/blob/main/docker-compose.yml
version: '3.9'

volumes:
  etc_pihole-unbound:
  etc_pihole_dnsmasq-unbound:

services:
  pihole:
    container_name: pihole
    image: pihole/pihole:latest
    hostname: ${HOSTNAME}
    domainname: ${DOMAIN_NAME}
    depends_on:
      - unbound
    restart: unless-stopped
    deploy:
      resources:
        limits:
          memory: 768M
        reservations:
          memory: 256M
    ports:
      - 2443:443/tcp
      - 3553:53/tcp
      - 3553:53/udp
      - ${PIHOLE_WEBPORT:-80}:80/tcp
      # - 5335:5335/tcp # Uncomment to enable unbound access on local server
      # - 22/tcp # Uncomment to enable SSH
    environment:
      - FTLCONF_LOCAL_IPV4=${FTLCONF_LOCAL_IPV4}
      - TZ=${TZ:-UTC}
      - WEBPASSWORD=${WEBPASSWORD}
      - WEBTHEME=${WEBTHEME:-default-light}
      - REV_SERVER=${REV_SERVER:-false}
      - REV_SERVER_TARGET=${REV_SERVER_TARGET}
      - REV_SERVER_DOMAIN=${REV_SERVER_DOMAIN}
      - REV_SERVER_CIDR=${REV_SERVER_CIDR}
      - PIHOLE_DNS_=192.168.0.188#5335
      - DNSSEC="true"
      - DNSMASQ_LISTENING=single
    volumes:
      - ./etc_pihole-unbound:/etc/pihole:rw
      - ./etc_pihole_dnsmasq-unbound:/etc/dnsmasq.d:rw
    dns:
      - 127.0.0.1
      - 9.9.9.9
      - 1.1.1.1
    networks:
      pihole_dns_network:
        ipv4_address: 172.21.200.2

  unbound:
    build: ./unbound
    container_name: unbound
    #user: "972:972"
    hostname: unbound.local
    restart: unless-stopped
    ports:
      - "5335:53/tcp"
      - "5335:53/udp"
    volumes:
      - ./etc_unbound:/opt/unbound/etc/unbound/
    networks:
      pihole_dns_network:
        ipv4_address: 172.21.200.3

networks:
  pihole_dns_network:
    name: "pihole_dns_network"
    ipam:
      driver: default
      config:
        - subnet: 172.21.200.0/24
          gateway: 172.21.200.1
          ip_range: 172.21.200.1/24
```
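A quick sanity check from the host that both resolvers are answering on their published ports:

```bash
# Query Unbound directly on its host-mapped port
dig @127.0.0.1 -p 5335 example.com +short

# And Pi-hole on its mapped DNS port
dig @127.0.0.1 -p 3553 example.com +short
```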
## Abandoned, for now…
| Tool | Description | Reason |
|---|---|---|
| Tooljet | Low-code app builder | No need for it at present |
| Joplin | Note taking app | Needed mostly for bookmarks and there are better tools for that |
| Linkding | Link manager | Nice tool, but replaced with Karakeep |
| n8n | Workflow tool | No need for it at present, but will likely reintroduce at some point for AI agent prototyping |
| Flowise | AI workflow tool | Prefer n8n as it’s more general |
Anyway, that’s enough information for nobody but search engine crawlers to ever read.