A while ago I was shopping around for a reverse proxy to manage access to my containers, and I landed on Traefik. I’ve used nginx a lot in the past and it’s still great, but I figured it was time to get more comfortable with container-orchestration concepts like labels and service discovery, so this seemed like a good idea.
Here’s the thing though: I don’t really use it for its intended purpose as an ingress controller, because I don’t manage my containers with k8s. I use ansible + systemd, because my time on earth is precious and I have better things to do. Anyway, here is my simplified application of this complex tool.
Traefik container
I deploy it in a container with ansible thusly:
~/projects/mine/gwt/traefik.yml
```yaml
- hosts: gwt
  gather_facts: false
  vars:
    application: traefik
    podman_network: "{{ networks.luxurious_lair }}"
    crowdsec_traefik_bouncer_key: "{{ vault_crowdsec_traefik_bouncer_key }}"
    joker_api_mode: "{{ vault_joker_api_mode }}"
    joker_username: "{{ vault_joker_username }}"
    joker_password: "{{ vault_joker_password }}"
  tasks:
    - name: Create config folder
      ansible.builtin.file:
        path: "{{ config_directory }}"
        state: directory
        owner: "{{ common_user }}"
        group: "{{ common_group }}"
        mode: "0770"

    - name: Template static traefik config
      ansible.builtin.template:
        src: traefik/traefik.yaml.j2
        dest: "{{ config_directory }}/traefik.yaml"
        mode: "0440"

    - name: Template dynamic traefik configs
      ansible.builtin.template:
        src: "traefik/conf.d/{{ item }}.j2"
        dest: "{{ config_directory }}/conf.d/{{ item }}"
        mode: "0660"
      with_items:
        - middlewares.yml
        - service1.yml
        - service2.yml
        - service3.yml

    - name: Create log folder
      ansible.builtin.file:
        path: "{{ logging_directory }}/{{ application }}"
        state: directory
        owner: "{{ common_user }}"
        group: "{{ common_group }}"
        mode: "0770"

    - name: Create acme.json
      ansible.builtin.file:
        path: "{{ config_directory }}/acme.json"
        state: touch
        mode: "0600"
        modification_time: "preserve"
        access_time: "preserve"

    - name: Create ban.html
      ansible.builtin.file:
        path: "{{ config_directory }}/ban.html"
        state: touch
        mode: "0600"
        modification_time: "preserve"
        access_time: "preserve"

    - name: Create container
      ansible.builtin.include_role:
        name: podman_container
      vars:
        image: traefik:v3.4.1
        env:
          JOKER_API_MODE: "{{ joker_api_mode }}"
          JOKER_USERNAME: "{{ joker_username }}"
          JOKER_PASSWORD: "{{ joker_password }}"
          TZ: "{{ common_timezone }}"
        volumes:
          - "{{ config_directory }}/traefik.yaml:/etc/traefik/traefik.yaml"
          - "{{ config_directory }}/conf.d:/etc/traefik/conf.d"
          - "{{ config_directory }}/acme.json:/acme.json"
          - "{{ config_directory }}/ban.html:/ban.html"
          - "{{ logging_directory }}/{{ application }}:/var/log/traefik"
        ports:
          - "80:80/tcp"
          - "443:443/tcp"
        generate_systemd:
          path: "/home/{{ common_user }}/.config/systemd/user"
```
The important bits are the auth tokens for my DNS provider and firewall bouncer, and the dynamic configuration files. Since all my containers are rootless Podman, I don’t use the docker provider and opt for the simpler file provider. It’s much easier to wrap my head around static configs, and it gives me confidence about what is being exposed. Yes, I know about service autodiscovery, and no, I don’t want to use labels yet, even though I mentioned them earlier. If a container does need a label for some reason, I slap it on with the ansible module. For now this works nicely, and each service can have its own dynamic config in a separate file by having the file provider watch a mounted directory:
~/projects/mine/gwt/templates/traefik/traefik.yaml.j2
```yaml
...
providers:
  file:
    directory: /etc/traefik/conf.d
    watch: true

experimental:
  plugins:
    bouncer:
      moduleName: github.com/maxlerebourg/crowdsec-bouncer-traefik-plugin
      version: v1.4.2
...
```
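Loading the plugin only registers it; to actually enforce bans, it also needs a middleware instance in the dynamic config, wired to the bouncer key and the mounted ban page. A minimal sketch of what that could look like in `conf.d/middlewares.yml` — the option names (`crowdsecLapiKey`, `banHTMLFilePath`) come from the plugin’s README and may differ between plugin versions, so treat them as assumptions:

```yaml
http:
  middlewares:
    crowdsec:
      plugin:
        bouncer:
          enabled: true
          # LAPI key templated from vault in the playbook above
          crowdsecLapiKey: "{{ crowdsec_traefik_bouncer_key }}"
          # serve the mounted ban page to blocked clients
          banHTMLFilePath: /ban.html
```

Any router that should be protected then lists `crowdsec` (or a chain containing it) among its middlewares.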
Dynamic configs for services
Another option, still much better than running a Docker daemon, would be to volume-mount the rootless Podman socket at /var/run/user/1000/podman/podman.sock and point Traefik’s docker provider at it. For networking between containers, though, I just deploy everything on the same Podman network and call it a day: Traefik can reach each service at http://containername thanks to Podman’s built-in DNS on user-defined networks, with slirp4netns handling the rootless networking underneath. I’ll switch to pasta later, probably, maybe.
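The shared network itself can also be managed from ansible. A hedged sketch using the `containers.podman` collection — the network name mirrors the `networks.luxurious_lair` variable from the playbook above, and the subnet is an assumption:

```yaml
- name: Create the shared Podman network
  containers.podman.podman_network:
    name: luxurious_lair
    subnet: 10.89.0.0/24
    state: present
```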
This way I can just expose services when I need them via ansible. Note that the service container does not need to publish any ports; traffic stays inside the shared Podman network. Here is an example dynamic config for a Jellyfin service:
~/projects/mine/gwt/templates/traefik/conf.d/jellyfin.yml.j2
```yaml
http:
  services:
    jellyfin:
      loadBalancer:
        servers:
          - url: "http://jellyfin:8096"

  routers:
    jellyfin-router:
      rule: 'Host(`jellyfin.{{ common_tld }}`)'
      service: jellyfin@file
      entryPoints:
        - web-secure
      middlewares:
        - chain-authelia
        - middlewares-jellyfin
      tls:
        certResolver: letsencrypt
        options: modern@file
        domains:
          - main: '{{ common_tld }}'
            sans:
              - '*.{{ common_tld }}'

    jellyfin-router-bypass:
      rule: 'Host(`jellyfin.{{ common_tld }}`) && Header(`traefik-auth-bypass-key`, `super-secret-key`)'
      service: jellyfin@file
      priority: 100
      entryPoints:
        - web-secure
      middlewares:
        - middlewares-jellyfin
        - chain-no-auth
      tls:
        certResolver: letsencrypt
        options: modern@file
        domains:
          - main: '{{ common_tld }}'
            sans:
              - '*.{{ common_tld }}'

    jellyfin-local-router:
      rule: 'Host(`jellyfin.{{ common_local_tld }}`)'
      service: jellyfin@file
      entryPoints:
        - web
```
I expose this container via three routers:
- to the internet, but behind Authelia SSO, at https://jellyfin.mytld.com for access from web browsers;
- again to the internet, but bypassing Authelia with a header token, so that the monitoring agents on my Raspberry Pi (Uptime Kuma and VictoriaMetrics) can still reach it over https;
- and finally just over http on the LAN. I have a rewrite rule on my AdGuard DNS server that points queries for jellyfin.mytld.lan at this container, which is useful for my TV and when I’m at home in general.
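The chain-authelia and chain-no-auth middlewares referenced by those routers live in middlewares.yml, which I haven’t reproduced here. As a rough sketch of the shape, assuming Authelia runs as a container named `authelia` on its default port 9091 (both assumptions, and the exact forward-auth endpoint path depends on the Authelia version):

```yaml
http:
  middlewares:
    authelia:
      forwardAuth:
        # assumed container name, port, and endpoint; check the Authelia docs
        address: "http://authelia:9091/api/authz/forward-auth"
        authResponseHeaders:
          - Remote-User
          - Remote-Groups
          - Remote-Email
          - Remote-Name
    secure-headers:
      headers:
        stsSeconds: 31536000
    chain-authelia:
      chain:
        middlewares:
          - secure-headers
          - authelia
    chain-no-auth:
      chain:
        middlewares:
          - secure-headers
```

The chains keep the router configs short: a router opts into SSO with chain-authelia or skips it with chain-no-auth, while shared concerns like headers stay in one place.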
Static config
In my other post I talked about how I use HAProxy and WireGuard to tunnel IPv4 traffic to my homeserver. Here is where the magic happens, via the PROXY protocol:
~/projects/mine/gwt/templates/traefik/traefik.yaml.j2
```yaml
entryPoints:
  web:
    address: ':80'

  web-secure:
    address: ':443'
    forwardedHeaders:
      trustedIPs:
        - "{{ vps_public_ipv4 }}"
        - "{{ vps_public_ipv6 }}"
        - "{{ wg_subnet_ipv4 }}"
    proxyProtocol:
      trustedIPs:
        - "{{ vps_public_ipv4 }}"
        - "{{ vps_public_ipv6 }}"
        - "{{ wg_server_ipv4 }}"
    http3:
      advertisedPort: 443
```
This lets me do the TLS offloading on beefier hardware and basically keep the VPS around just for the static IP and the VPN tunnel. Overall I’ve come to enjoy using all these Go tools, even though I think the language’s syntax is hideous to read and YAML misconfigurations will break things in extremely non-obvious ways. Oh well, at least it’s pretty fast.
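For completeness, the VPS side of this (covered in the other post) boils down to HAProxy forwarding raw TCP over the WireGuard tunnel with the PROXY protocol enabled. A rough sketch — the frontend/backend names and the peer address 10.0.0.2 are assumptions; `send-proxy-v2` is what adds the PROXY protocol header that the `proxyProtocol.trustedIPs` block above is configured to trust:

```haproxy
frontend fe_https
    mode tcp
    bind :443
    default_backend be_homeserver_https

backend be_homeserver_https
    mode tcp
    # WireGuard peer address of the homeserver (assumed);
    # send-proxy-v2 preserves the real client IP for Traefik
    server homeserver 10.0.0.2:443 send-proxy-v2
```

Because the proxy operates in TCP mode, the TLS session terminates at Traefik on the homeserver, not on the VPS.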