A self-hosted, federated Soundcloud. Funkwhale.

My desire to self-host all the services I currently pay for continues unabated and Soundcloud is next. Although it remains a great place to discover artists and share music, for my purposes (uploading my terrible mixes) it’s overkill. Those €100 I can redirect elsewhere, in this case a Plex lifetime pass for even more self-hosting haha.

Anyway, Funkwhale has been on my radar for a while and I decided to add yet another service to the Raspberry Pi at home. I wish I could say the process was easy and fast, because it was the exact opposite. Although the documentation is extensive, I encountered a wide variety of hardware, software and miscellaneous problems that turned this into a huge timesink. Along the way I learned a couple of things about janitoring services and about myself, namely that my time is clearly worthless. In no particular order, here are some of the problems I encountered.

  • Most Docker images do not support ARM architectures (i.e. the Raspberry Pi). You’ll either have to extensively tweak and rebuild the image, or hope someone has done that for you. The errors are cryptic, the build process is slow, and the whole value proposition of Docker saving you time goes out the window. To Funkwhale’s credit there are clear instructions in the docs, but I still ultimately failed and went with the Python installation.
  • Someone tipped me off that YunoHost comes with Funkwhale already bundled, but that only led to other weird-ass problems, like not detecting the correct external IP associated with my domain. For some reason that I don’t care enough to discover, YunoHost resolves the Pi’s IP by calling out to ip.yunohost.org, which returns something completely different than curling https://api.ipify.org or other public IP reporting services. My existing tried & tested method for dynamic DNS on this domain is to simply curl the Cloudflare API to update the A record whenever my ISP issues a new external IP.
  • Lots of troubleshooting the external hard drives: undervoltage errors and mysterious unmounting (despite the drives having their own power supplies, which led me to conclude that the Pi’s own power supply wasn’t up to snuff), bad superblocks (never buy Western Digital products, but I knew this already). Ultimately I just jammed a 16 GB USB key in there and called it a day. The Syncthing agent automatically pulls down any new files from the appropriate folder on my desktop, where the mixing deck is connected. Oh, and if you ever get around to using Syncthing seriously, absolutely do manually specify which device is pull-only, and definitely do not accidentally leave the default bi-directional sync on. Like me, you will regret it. Oh, and turn on versioning.
  • Funkwhale will refuse to import music tracks without metadata or tags. Do you know which types of music files are generated without artist, year, genre or title? That’s right, mixes. Thus, you need to run MusicBrainz Picard and manually specify metadata.
  • One of the appealing features of Soundcloud is that you can listen without registering an account, which runs contrary to the Funkwhale use-case: you’re meant to register an account on some instance, which grants you listening access. I want my mixes anonymously accessible, so I had to tweak Funkwhale’s surprisingly unintuitive permissions to allow that.
  • There’s probably a bunch more annoying things I solved along the way that I forgot about, but hey, that’s why they pay you the big bucks in IT: the capacity for dealing with continuous frustration.

After much struggle, the service is now live over at https://choons.rpavlov.com/library/

Bye gmail

Although it’s been a long time coming, I finally started the process of de-googling. The crux of it is: storing personal data in North America is a bad idea. Pretty simple really. Also, there is always the possibility of them revoking access to your accounts for whatever reason and providing no support, which is far less likely to happen with a provider you pay for.

What this looks like for me personally is

  1. Move my gcloud storage backups of devices, pictures, docs etc onto my home NAS instead of, you know, the cloud. Download everything from google drive/docs/sheets/pics.
  2. Set up forwarding and export emails, calendar and contacts to https://mailbox.org. Download old emails via Thunderbird and re-upload to mailbox.org.
  3. The tedious part: login to all services that are registered under gmail addresses and change them to a relevant alias of the main @mailbox.org account. Among the neat features that caught my eye is the ability to have a catch-all custom domain. This means that anything directed to [email protected] will still hit my inbox.
  4. A week or two after forwarding is set up, visit https://security.google.com/settings/security/deleteaccount and then delete all Google apps.

The next move is to migrate from iCloud as well.

My Gitbook knowledgebase

My thirst for knowledge is at an all-time high, and coupled with the catnip of personal productivity and organization, it has finally led me to compile a (public) knowledge base. Previously I tried the Zettelkasten system but couldn’t quite get it to stick. Or rather, I failed to form a habit of building it out. Also, simply having all the markdown files in a repo didn’t really inspire me to re-read or revisit them in order to solidify learning. With tools like Notion, the ability to structure and change the layout of a page is really cool, but it ultimately just got in the way of writing and served as a distraction. Hijacking the / character also added friction.

I decided to give Gitbook a shot, so you can now peep my notes over at https://knowledge.rpavlov.com

Ultimately, what I am after is a stronger motivation to write more. Emptying my mind and consolidating+solidifying knowledge about various concepts is also a nice bonus. Finally, sharing useful information and learning also feels pretty good.

Live over Tor

One of the things I’ve been meaning to do for a while is gain a better understanding of the onion protocol and how hidden services are hosted. I got about 50% through the Tor whitepaper before deciding to pivot to some practical applications, so this blog is now mirrored at http://xzjjcvowtdunfx4z6dkeund7sjvt3k7nphgcfdusy64smyqpmdusmpad.onion/. It’s v3, which can be quickly inferred from the 56-character address.
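That version check can even be mechanized: v3 onion hostnames are exactly 56 base32 characters (a-z, 2-7) before the .onion suffix, while the old v2 ones were 16. A small shell sketch (the function name is my own invention):

```shell
#!/bin/sh
# Succeeds only for v3 onion addresses: 56 base32 chars + ".onion".
is_v3_onion() {
  case "$1" in
    *.onion) ;;                      # must end in .onion
    *) return 1 ;;
  esac
  host="${1%.onion}"
  [ "${#host}" -eq 56 ] || return 1  # v3 hostnames are 56 chars long
  case "$host" in
    *[!a-z2-7]*) return 1 ;;         # anything outside base32 fails
  esac
}

# Usage: is_v3_onion xzjjcvowtdunfx4z6dkeund7sjvt3k7nphgcfdusy64smyqpmdusmpad.onion && echo v3
```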

The steps

The static files for this blog reside on a 3rd gen Raspberry Pi and are served by nginx, so we’ll set up Tor there as well.

Installing Tor requires either building from source, or adding some 3rd party APT repositories. I used the repos because building will take a while. There are some gotchas, however, since Raspbian differs from Debian in a few ways. More detailed info here.

In short, it’s sufficient to add the following to your /etc/apt/sources.list. The [arch=amd64] is important, otherwise your apt update will fail. Also pay attention to your Raspbian version: is it jessie, stretch or buster? Double check with cat /etc/os-release.

deb [arch=amd64] https://deb.torproject.org/torproject.org buster main
deb-src [arch=amd64] https://deb.torproject.org/torproject.org buster main
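The suite name in those lines (buster above) has to match your OS. A throwaway way to script that check is to parse VERSION_CODENAME out of the os-release file; the helper name here is my own:

```shell
#!/bin/sh
# Print the distro codename (jessie/stretch/buster) from an os-release file,
# so the right suite name goes into sources.list.
codename() {
  sed -n 's/^VERSION_CODENAME=//p' "$1"
}

# Usage: codename /etc/os-release
```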

Add the package signing GPG keys.

curl https://deb.torproject.org/torproject.org/A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89.asc | gpg --import
gpg --export A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89 | sudo apt-key add -

Hope this works.

apt update && apt upgrade
apt install tor deb.torproject.org-keyring

Next, ensure the hidden service dir /var/lib/tor/hidden_service exists and has the correct permissions.

sudo mkdir -p /var/lib/tor/hidden_service
sudo chown -R debian-tor /var/lib/tor/hidden_service
sudo chmod 700 /var/lib/tor/hidden_service

Edit /etc/tor/torrc to reflect the setup

HiddenServiceDir /var/lib/tor/hidden_service
HiddenServicePort 80 127.0.0.1:80

Finally, add a vhost to nginx. Your web root may vary so double check that.

server {
    listen 127.0.0.1:80;
    server_name xzjjcvowtdunfx4z6dkeund7sjvt3k7nphgcfdusy64smyqpmdusmpad.onion;
    root /var/www/html;
    client_max_body_size 32M;
    charset utf-8;
    index index.html;
}

Now, do a service tor restart && service nginx reload and you should find a hostname file (alongside the service’s key files) in /var/lib/tor/hidden_service. The hostname is your onion URL, so go ahead and test it. If something went wrong, which is highly likely, tail /var/log/syslog, /var/log/nginx/access.log and /var/log/nginx/error.log for hints. Apparently Tor is supposed to generate logs in /var/log/tor, but mine is empty.

Ah, one more thing

Setting up a middle/guard relay is pretty trivial, and a great way to help out the community. Add a few lines to /etc/tor/torrc and ensure port 9001 is reachable via https://canyouseeme.org/. Then restart, and we’re good to go. Keep tailing syslog however, since it will indicate if things go wrong.

ORPort 9001
ExitRelay 0
SocksPort 0
ControlSocket 0
ContactInfo [email protected]
Nickname CiaSurveillanceVan

After about an hour we can verify the relay is operational at https://metrics.torproject.org/rs.html#search/CiaSurveillanceVan. Fantastic.

By hosting a hidden service, however, I’ve now created a new set of problems for the poor Pi, and by extension my home network connection, in the form of eventual ruthless DDoSing. These posts discuss the problems and solutions, and are a good starting point for hardening the home network.

In theory, though, v3 addresses should be impossible to crawl unless publicly advertised. Still, for my peace of mind it’s best to take precautions. The first step is getting better network monitoring in place, and Munin looks like the tool for the job. Updates to follow as they’re implemented.

Bonus pro-tip

Now would be a good time to back up the Pi’s SD card, to save our hours of toil. I have an additional USB drive attached and mounted, so I’ll drop the image there. Note that your SD card might have a different device name, so check with lsblk.

sudo dd bs=4M if=/dev/mmcblk0 of=/media/pi-space/sd-card-backup-2021-02.img status=progress


The evergreen home networking blog post

I’ve reached the point in my life where I find home networking fun. It’s either that or a model train set. The catalyst was finally setting up an OpenWrt router, which gives me options. Putting the Vodafone KabelBox modem into full bridge mode is mandatory for proper NAT resolution and for accessing local services from the outside. The hosting hub is a Raspberry Pi 3 running a number of additional services like nginx.

OpenWrt & KabelBox

Luckily Vodafone are nice enough to provide the option to toggle bridge mode on their router/modem devices. It was deeply buried here.

However, after enabling it my internet connection died. The router was being assigned an IP on the WAN interface from the gateway (modem), but I couldn’t ping through to any DNS servers. After investing several hours reading the OpenWrt docs and searching online, I found a tip about restarting the cable modem so it assigns the router a fresh WAN IP (one that isn’t from the old 192.168.0.x pool, but something new from the ISP in a completely different subnet). The WAN interface keeps running as a DHCP client, which is the default; in fact, no configuration beyond the wifi setup was needed on the router. Once again, the rule of thumb about computers holds: if in doubt, restart.

Cloudflare & DNS

On the Cloudflare side, I had to create a couple of records and then use my Global API key (an API token didn’t work for some reason) to run a cron task which updates the A record whenever the router receives a new public IP from the ISP. We want this router accessible from the internet. Hackers welcome, I guess. The DNS records consist of an A record dynamic -> router-ip, initially set to anything and subsequently updated by the cron task running on OpenWrt, and a CNAME record rpavlov -> a-super-secret-subdomain.rpavlov.com. Pinging either one resolves to Cloudflare’s servers, so my IP is nicely hidden and protected from DDoS. The steps are outlined in more detail at https://github.com/dcerisano/cloudflare-dynamic-dns. Finally, add a Page Rule to redirect all traffic to HTTPS.
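The cron task itself boils down to two curl calls: fetch the current public IP, then PUT it into the A record via the Cloudflare v4 API, with the Global API key going into the X-Auth-* headers. A hedged sketch of what could run on the router — the ZONE_ID/RECORD_ID/CF_* placeholders, the ipify lookup and the payload helper are my own assumptions, not the exact script from the repo linked above:

```shell
#!/bin/sh
# Sketch of the dynamic-DNS cron task: update a Cloudflare A record
# whenever the ISP hands out a new public IP.
# ZONE_ID, RECORD_ID, CF_EMAIL and CF_API_KEY are placeholders.

# Build the JSON body for the DNS record update.
payload() {
  printf '{"type":"A","name":"%s","content":"%s","ttl":120,"proxied":true}' "$1" "$2"
}

update_record() {
  ip=$(curl -s https://api.ipify.org)   # current public IP
  curl -s -X PUT \
    "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records/$RECORD_ID" \
    -H "X-Auth-Email: $CF_EMAIL" \
    -H "X-Auth-Key: $CF_API_KEY" \
    -H "Content-Type: application/json" \
    --data "$(payload dynamic.rpavlov.com "$ip")"
}

# Crontab entry, e.g. every 5 minutes: */5 * * * * /root/ddns.sh
```

Setting proxied to true is what keeps the real IP hidden behind Cloudflare’s servers, matching the DDoS-shielding setup described above.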

Raspberry Pi

  • Disable ssh password login. While we’re at it, only allow ssh into the router from the LAN interface.
  • We need to reserve a static ip for the pi in the LAN. Easily done through the open-wrt Luci web interface.
  • We need to expose ports 80/443 in the router firewall on the WAN interface, and route them to the Pi’s local IP on the LAN interface. Additionally, we need to remap the port that the router’s web interface server (uHTTPd) runs on, to free up those ports.
  • Issue an SSL cert with Certbot, authenticating via a challenge file in the webroot.
  • Drop the static blog files in /var/www/html.
  • Add my hardened nginx config.

Dropbox replacement

Dropbox can’t really be trusted, so I trade its reliability and security for dubiously decreased paranoia. Instead I use Syncthing, which runs on each client device (laptops, PC) and doesn’t require hosting. It works flawlessly out of the box.


Pi-hole

As easy as running curl -sSL https://install.pi-hole.net | bash on the Pi and following the steps. They are also kind enough to explain why curl-to-bash, and give you the option to audit the scripts yourself. Afterwards, set the DNS servers of each device on the home network to point at the Pi’s IP. Unfortunately the Vodafone router does not provide the option to set DNS servers, so this has to be done per device.

Next up

  • Funkwhale for streaming my own music files outside of Spotify.
  • A Tor relay.
  • This blog as a hidden service, which is pretty straightforward actually: Download Tor, modify the .../tor-browser_en-US/Browser/TorBrowser/Data/Tor/torrc to point to nginx, then access at the relevant vhost+port.


Much respect to the legion of clever elves who made all this software magic possible. Throughout the course of writing this post I also realized I’ll probably be touching computers until the day I die, which seems like a mixed blessing.

Update 2020-11-30

The dream is over. My internet speed plummeted when using the router, probably because it’s running on 10-year-old hardware. In any case, it turns out I can still just forward ports 80/443 from the outside to the Pi’s static IP, as well as run the cron job to update the A record with the public IP of the Vodafone router. One other thing I added was uptime checks from Google Cloud. God bless.

© 2021 Roumen Pavlov. All rights reserved.