Martyn's random musings

Recently I was asked about my android phone setup where I don't have Google Play and yet seem to have no real issues with life without the Big G.

What prompted this, ironically, was a time when I actually did have an issue – but the fact that that's so rare is worthy of note, so have a note!

First, the hardware

I run a Poco 5G and this took some research, as most handset manufacturers are refusing to unlock bootloaders and I want a relatively modern experience, not a slow, ancient (in phone terms) device.

Poco is a sub-brand of Xiaomi, but their phones are easier to unlock than some Xiaomi devices because they're a smaller company.

To unlock the phone I had to buy it, attach it to my PC, request an unlock code, then put it back in its box for 2 weeks. Yep, you read that right: they force a 2-week wait, basically to nudge you into using the phone the way they want you to. I persevered and continued 2 weeks later.

Then, the OS

I use LineageOS which is the successor to CyanogenMod which stopped being so open when they became a company and there was drama. Let's not go into that.

It is possible, as with any custom “rom”, to install the Gapps package, sign into a google account and have everything work, but that's not my style. So no Gapps. Instead I use a project called MicroG.

Setting up MicroG can be a bit of a pain, and I can't really remember what I did to get it completely setup but once it's set up, it's solid.

What?! MicroG uses google servers!

Yep, that's an irritating fact of life. Almost anything that wants to send push messages to an android device without their app draining battery uses google servers, so you are a bit stuck here. I prefer that than a signed-in google account though, I can at least tell myself that it's harder to correlate that way. More on push later though.

It doesn't, however, use Google Maps or Play Services; instead it reimplements these. There is a “fake store” app that you install so apps believe the Play Store is present, and the mapping side of things uses OpenStreetMap.

Then I sideload using “Browser” (yep that's the AOSP browser right there) f-droid and we start installing apps.

And what apps would those be? f-droid only does open source, what about my bank, public transit etc.

Hold your horses, we're getting there. First, we get the opensource apps and set up the niceties like contacts and calendar sync.

  • DAVx5 allows me to sync my contacts and calendar with my NextCloud account. I prefer this to the NextCloud app because it uses standards-driven stuff. I'm also used to doing it this way, since in the early days OwnCloud didn't have an app.
  • FairEmail – I use this because k9mail always had trouble with push notifications over IMAP and surprise surprise, I don't use Gmail on my phone (my legacy gmail address forwards to either my zoho or yunohost mailbox)
  • FakeStore – as mentioned above, this is for allowing stuff to think it's installed by the Play Store.
  • FFUpdater – this is an odd one, but it installs various browsers and keeps them updated – I use it for Bromite when I have to use “chrome”, Firefox Klar and Tor Browser.
  • Molly – I use this signal fork, and to get it I had to add a separate repo to f-droid.
  • NewPipe – a nice YouTube frontend that again, doesn't require a login.
  • ntfy – This one rocks. I self-host a server and my Tusky notifications go via this rather than google's servers – a growing standard called UnifiedPush enables this and it's worth checking out.
  • OrganicMaps & OsmAnd~ – Organic is a lovely front-end to OpenStreetMap data, with routing and even transit routing built-in. It's really quite good. It will keep getting better I hope. I still have OsmAnd because whilst its transit routing is slower and ostensibly worse, it does allow for more variations.
  • QR Scanner (PFA) – this is a nice QRCode scanner that actually shows you the url before going to it.
  • Telegram FOSS – yup, telegram is on f-droid.
  • Tusky – I'm a fediverse fedizen and this is my preferred client.
  • UntrackMe – This redirects links sent to me from twitter, reddit, youtube etc. to nitter, teddit, invidious etc. Really nice tbh.

Enough already, what about my banking app?

Okay, okay. The last app I install from f-droid is AuroraStore, which is basically a logged-out Play Store. Until about a month ago it was perfect, but now the search is somewhat broken. I might create a disposable gmail account to fix that.

If you have correctly set up microG – there's an app called microG Settings that is installed as part of flashing microG that has a handy Self-Check option – you're good to just install all the horrible tracker-filled software you want from play.

It even downloads the packages from google's servers – this is not ye olde skool “search and hope that it's the real apk”.

I have lots of things installed this way, and whilst they'll be full of trackers, some are actually necessary for my lifestyle.

  • My banking apps – I have n26, tomorrow bank and revolut as backup banks in Germany, and they all “just work”.
  • 1password – much as I love the company I doubt they will want an opensource client
  • BVG Fahrinfo – the Berlin transit app, again, just works.
  • Chwazi – this one is for the boardgamers, it's purely for choosing who goes first.
  • DB Navigator – the German railways app, very nice to have for train travel without paper.
  • FreeNow – Taxi app that works in Berlin with the real Taxis and has deals with Uber and other non-taxi rideshare drivers.
  • Slack – yeah, kinda nice to have that.
  • ÖBB tickets – again, I often use Vienna public transport. Just works.

Really, just works?

Okay, so the other day DB left me stranded in a town where FreeNow didn't have coverage, so I had a bit of an emergency: there were no taxis and I didn't have Uber installed (I still don't really know if it would have helped, tbh). The search on Aurora wasn't working and I couldn't get an install to trigger from the play store website (which it can do, but google is trying to make that hard).

Whilst I was getting MicroG working, I did have trouble with the taxi app: it wasn't getting the location right, so it would tell the taxi company I was a long way from where I was ordering, and at the time they wouldn't let you order to a place your phone wasn't. That's a bad policy imo. But once I was getting green ticks throughout my MicroG settings, it's been glitch-free.

Am I getting the best most privacy-respecting experience with zero hassle? No. Is it convenient enough for me, yes. Is it privacy-respecting enough for me, well, I want better, but for now, yes.

If you want to comment, feel free to tag me in your favorite fediverse client :-)

I've split the list into some sections :

Useful for most folks interested in self-hosting

  • cryptpad – Collaborative “office type” suite, entirely clientside encrypted drive, docs, spreadsheets etc. very useful when sharing info with my parents that really shouldn't be public
  • funkwhale – I make music sometimes, and this is a federated music site akin to soundcloud.
  • gotosocial – a lightweight #fediverse #mastodon style server. I'm currently having some issues with it, so my main account is on
  • hauk – Remember google latitude? where you could share your location with a friend on google maps and actually walk toward them and have it update? This is a self-hosted implementation of that kind of service.
  • logitech media server – a completely offline multiroom audio capable music library. I have devices scattered through the apartment, and I can play music to all of them at once with them all in sync.
  • nextcloud – very extensible personal “cloud” storage – I use this for ensuring my password manager is synced (webDAV), and my phone contacts (cardDAV) and calendar (calDAV) are up-to-date.
  • ntfy – a UnifiedPush self-hosted notification system. It allows tusky to ask for notifications without google being involved, and also allows me to send push messages from scripts (e.g. attached to flexget) or other systems such as uptime-kuma
  • owncast – a single-user twitch or youtube live replacement. Allows me to stream if I'm in the mood for that, without using a third party. NOTE: this is one of the few things I have on a free tier hosting platform as well because it does run better there.
  • peertube – #peertube is video hosting, like youtube, only better, decentralised and federated #fediverse
  • pihole – access the internet without the trackers and adverts.
  • syncthing – an alternative way of syncing data between machines. Great for folders of files such as obsidian or other notetaking apps.
  • whoogle – This is SO good. Google search without the javascript tracking. Okay, they can still track my IP, but it's so much better. I love the idea of duckduckgo and others, but they just don't get the results I need. I use libredirect to redirect any google results pages to whoogle, so even if someone gives me a link to a google search, I get to my whoogle instance.
  • writefreely – this blog, running WriteFreely

Useful for the more geeky folks (programmers, arduino tinkerers, home automation geeks etc.)

  • argocd – a #continuousDelivery platform, defines what is running in kubernetes in git and keeps it in sync with the git repo. Allows me to rebuild the cluster from scratch in theory.
  • drupal – I have a very old site that I want to keep online, and ~15 years ago I migrated the content to drupal.
  • code-server – vscode, but not built by microsoft, and in the browser. Allows me to access my dev environment anywhere I have a browser.
  • (No longer running) domoticz – a home automation system written in c(++?) that used to run my home automation
  • (Deprecated) drone – a CI system I no longer use for new projects because they switched to a proprietary license
  • echo-server – a useful piece of debugging software that tells me how a request actually looks when it reaches a pod in the cluster
  • esphome – a way to expose esp8266/esp32 devices to HomeAssistant for automation purposes without writing code.
  • flexget – a download manager that lets me listen to rss feeds and download the files within them. I need this for reasons.
  • gitea – Fork of Gogs – a nice little git server that looks and feels a lot like github.
  • homeassistant – Home automation, done pretty well, and entirely offline (it can be online but the way I run it isn't).
  • hyperion – creates an “ambient light” system from video streams – this is used to give my TV an “ambilight” style colourwash behind it that changes based on the scene. I use wled, an esp8266 and a string of ws2812 addressable LEDs to achieve the lighting, and my LG tv captures its screen and sends it to hyperion.
  • jellyfin – My media library, completely offline, sat on my drives here, not on some cloud server.
  • karaoke-eternal – My karaoke library exposed as a web service so I can plug a web browser into a projector or TV and create a karaoke party.
  • minio – S3 compatible storage allowing me to upload files of any kind and link to them.
  • mosquitto – I like to string things together using MQTT. Instead of letting homeassistant provide this, I point homeassistant at my own MQTT server.
  • nodered – When I don't want to write code to automate my homeassistant stuff, this allows flow-based “programming” of things. e.g. if the smoke alarm above my 3d printer makes a loud noise, the esp8266 next to the fire alarm detects this, sends an MQTT message to mosquitto, and node-red sees this and sends a signal to turn off power to the printer.
  • (deprecated) openvpn – a VPN solution that I don't really use any more, but it's slightly more widely supported than wireguard (the one I actively use). It's also a lot easier to debug.
  • renovate – monitor your repos for outdated dependencies. a lot like dependabot. Works with gitea nicely.
  • rhasspy – OFFLINE VOICE ASSISTANT! This is super cool and I haven't done enough with it, fairly new to me, but you can use the ai-thinker esp32 boards as satellites to a server with ESP32-Rhasspy-Satellite.
  • uptime kuma – a system that checks if services are up and notifies if they are not, a lot like pingdom or uptime robot. NOTE: this is one of the few things I have on a free tier hosting platform as well because it lets me know if my home network is reachable from the internet then.
  • wg-access-server – the first easy reliable wireguard (VPN) setup I've seen. Works well, allows me to enroll devices onto my network super easily.
  • woodpecker ci – a fork of the drone CI system which allows me to automatically build code on a git push.
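As a concrete illustration of the mosquitto/node-red smoke-alarm flow described above, the publish side is a one-liner with the mosquitto CLI tools. The broker host and topic names here are made up for illustration:

```shell
# Hypothetical broker host and topic – the esp8266 next to the smoke alarm
# would publish something like this when it hears the alarm:
BROKER="mqtt.home.lan"
TOPIC="home/printer/smoke"

# The actual publish (commented out – it needs a reachable broker):
# mosquitto_pub -h "$BROKER" -t "$TOPIC" -m "ALARM"

# node-red subscribes with the equivalent of:
# mosquitto_sub -h "$BROKER" -t "$TOPIC"

echo "$BROKER $TOPIC"
```

node-red's MQTT-in node then reacts to the ALARM payload and flips the printer's power plug off.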

Useful for just me

  • ledcontroller – I wrote this to make pretty patterns on an LED wall behind me when I'm streaming. The wall is wled on an esp8266 with strips of ws2812 leds

Part of the setup of my home kubernetes cluster

  • cert-manager – Automatically gets me https certificates so everything I expose to the internet gets a nice certificate
  • cluster-ingress – NGINX Kubernetes ingress controller that allows me to direct all inbound port 443 + 80 traffic to the appropriate service
  • external-dns – configures my dns records based on what I put in my ingresses so I don't have to manually create dns records.
  • grafana – dashboards for my prometheus monitoring.
  • longhorn – a system that allows me to use all my disks in all the nodes on the cluster to create PVCs.
  • metallb – by defining a kubernetes service of type LoadBalancer, it gets an IP on my home network.
  • prometheus – gotta have that observability on the system. Super overkill for home use but hey, I like having it.
  • samba – I expose some PVCs to my local network for windows based PCs.
  • vault – I'll be honest here, I have never got this working the way I would like, and so my secrets are not as well encrypted.
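To illustrate the metallb bullet: any Service declared with type LoadBalancer gets an IP on the home network. A minimal sketch – the app name and ports are illustrative (8096 is jellyfin's default web port):

```shell
# Write a minimal LoadBalancer Service manifest; metallb hands it an IP
# from its configured pool when applied with kubectl.
cat > /tmp/jellyfin-svc.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: jellyfin
spec:
  type: LoadBalancer
  selector:
    app: jellyfin
  ports:
    - port: 80
      targetPort: 8096
EOF

# Apply it (commented out – needs a live cluster):
# kubectl apply -f /tmp/jellyfin-svc.yaml

grep 'type: LoadBalancer' /tmp/jellyfin-svc.yaml
```

On a stock cluster without metallb this Service would sit in "pending" forever; with metallb it's immediately reachable from any device on my LAN.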

Oh, and the cluster is running on very minimal debian boxes with k3s as the kubernetes cluster software.

Do you want to try an Owncast stream but don't have a VPS and don't want to spend money on it? How about we set up one on Oracle's “Always-Free” Tier?

First question out of the way: is Oracle's “Always-Free” Tier good enough to run Owncast? Simple answer: yes.

Longer answer, which you can skip if that satisfies you: we can either run their VM.Standard.E2.1.Micro class machine, which has 1 GB of RAM and a 2 GHz vCPU (ish) and should be enough, or an ARM machine with up to 4 CPUs and 24 GB of RAM. Given Owncast has an arm distribution, that's pretty incredible and works perfectly fine for Owncast.

Is this the best way to set up Owncast on Oracle? No! I'm doing this the easy way, and as an SRE person I would say “nooooo, don't do that, this is production, no clickops”, but I'm making this easy for streamers who are not so techie.

So, without further rambling, here's the prerequisites and the how-to.

Prerequisites:

  • A credit card that Oracle accepts. Seems to be Visa, Mastercard and Amex. Not sure if debit cards on those networks work, but I see no reason why not; they're mostly after ID verification here to stop spammers and bots creating accounts.
  • A domain name that you can control – that can be annoying, but you should be able to use a service like if you don't have your own domain name. You probably do want a real domain name as a streamer though, just saying...

So from the top:

  1. Sign up for an Oracle cloud free account – – The steps are really quite simple and other than the card payment which was less than a euro for me and refunded, painless. Be sure to choose the region closest to you as your home region because that will cut down latency to the Owncast box.
  2. Sign into your Oracle cloud account. Note that it's frustrating that they use two different usernames. The first is sent to you in the email, and you need to enter that to get to the real login page, then the second is your email address.
  3. Navigate to create an instance – Burger menu –> Compute –> Instances then click Compartment and choose the one marked (root) and click “Create Instance”
  4. Change the shape of the instance to meet our needs. I suggest using Ubuntu, as they don't have Debian (my usual preference), and I recommend using half of your free tier allowance – that leaves the other half free if you need to try a new version out, for instance.
    • Click “Edit” next to Image and shape
    • Click “Change Image” and select “Canonical Ubuntu” and click “Select Image”
    • Click “Change Shape” and select “Ampere” and “VM.Standard.A1.Flex”. Scroll down a little and change the “Number of OCPUs” to 2. The Memory should automatically change to 12, but if it doesn't set that as well. It's overkill but better than being too strict. Click “Select Shape”
    • Under “Add SSH keys” either click on the “Save private key” and “Save public key” links or if you know what an ssh key is and you already have one, click on upload or paste public keys.
    • Click “Create”
  5. Once your newly created machine has booted, you should then allow web traffic to it. This is in a couple of places (well done to Oracle for making this secure by default, but it makes for a longer document here!)
    • In the instance page click on the blue link next to “Virtual cloud network” under Instance details.
    • Scroll down to “Security Lists” on the left hand side
    • Click “Add Ingress Rules” and fill in the following details :
    • – Source Type : CIDR
    • – Source CIDR :
    • – IP Protocol : TCP
    • – Source Port Range : LEAVE BLANK
    • – Destination Port Range : 80
    • – Description : (Optional but my recommendation) HTTP
    • Click + Another Ingress Rule and fill in almost the same details
    • – Source Type : CIDR
    • – Source CIDR :
    • – IP Protocol : TCP
    • – Source Port Range : LEAVE BLANK
    • – Destination Port Range : 443 This is the difference from the previous rule
    • – Description : (Optional but my recommendation) HTTPS
    • Click + Another Ingress Rule and fill in almost the same details
    • – Source Type : CIDR
    • – Source CIDR :
    • – IP Protocol : TCP
    • – Source Port Range : LEAVE BLANK
    • – Destination Port Range : 1935 This is the difference from the previous rule
    • – Description : (Optional but my recommendation) RTMP
    • Click Add Ingress Rules
    • Go back to the Burger menu, click Networking and Virtual Cloud Networks.
    • Click the VCN in the table (blue text, should be something like vcn-20220909-2138)
    • Click on Network Security Groups, click Create Network Security Group and give it a Name (I chose allowown because it's allowing owncast – I'm creative like that) and click Next.
    • Enter almost exactly all the above stuff over again (2/3 of the network access is then done):
    • – Direction: Ingress
    • – Source Type : CIDR
    • – Source CIDR :
    • – IP Protocol : TCP
    • – Source Port Range : LEAVE BLANK
    • – Destination Port Range : 80
    • – Description : (Optional but my recommendation) HTTP
    • + Another rule
    • – Direction: Ingress
    • – Source CIDR :
    • – IP Protocol : TCP
    • – Source Port Range : LEAVE BLANK
    • – Destination Port Range : 443 This is the difference from the previous rule
    • – Description : (Optional but my recommendation) HTTPS
    • + Another rule
    • – Direction: Ingress
    • – Source CIDR :
    • – IP Protocol : TCP
    • – Source Port Range : LEAVE BLANK
    • – Destination Port Range : 1935 This is the difference from the previous rule
    • – Description : (Optional but my recommendation) RTMP
    • Create
    • Now we need to assign it to the instance NIC so Burger Menu –> Compute –> Instances
    • Click on the instance in the table
    • Network Security Groups: edit
    • + Another network security group
    • Select a value –> allowown –> Save changes
  6. Now is a good time to assign a DNS name to the IP of the machine. This is pretty much out of scope, but make an A record pointing to the Public IP address listed on the instance page. I'll use for an example from here.
  7. Okay, so all the “infrastructure” is done now, we just need to connect to the machine by ssh and install Owncast and Caddy (this does the https stuff for us). First, connect via SSH to the Public IP of your machine using the ssh key you downloaded or used in creation. Again, fairly well documented elsewhere, feel free to use PuTTY or other client, but this is somewhat out of scope here.
  8. The third and final network allow is to edit the iptables firewall on the ubuntu virtual machine. This is annoyingly a text edit; here's a sed line that should work, but basically you need to duplicate the line that ends with 22 -j ACCEPT three times and change the 22 on the new lines to 80, 443 and 1935.
    • Copy pasta: sudo sed -i s/"\(^.*22 -j ACCEPT\)"/"\1\n-A INPUT -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT\n-A INPUT -p tcp -m state --state NEW -m tcp --dport 443 -j ACCEPT\n-A INPUT -p tcp -m state --state NEW -m tcp --dport 1935 -j ACCEPT"/ /etc/iptables/rules.v4
    • OR sudo nano /etc/iptables/rules.v4 and duplicate the 22 line with 80, 443 and 1935 instead. LEAVE THE 22 LINE THERE!
    • sudo iptables-restore < /etc/iptables/rules.v4
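If you want to see what that sed does before touching the real file, here's the same substitution run against a throwaway two-line stand-in for rules.v4:

```shell
# Build a miniature stand-in for /etc/iptables/rules.v4.
cat > /tmp/rules.v4 <<'EOF'
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
COMMIT
EOF

# Same substitution as above: duplicate the port-22 ACCEPT line for 80, 443 and 1935.
sed -i 's/\(^.*22 -j ACCEPT\)/\1\n-A INPUT -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT\n-A INPUT -p tcp -m state --state NEW -m tcp --dport 443 -j ACCEPT\n-A INPUT -p tcp -m state --state NEW -m tcp --dport 1935 -j ACCEPT/' /tmp/rules.v4

# Four ACCEPT lines now: 22, 80, 443 and 1935.
grep -c -- '-j ACCEPT' /tmp/rules.v4   # prints 4
```

Once it looks right on the copy, run it (with sudo) against the real file as in the step above.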

From here I'm repeating some stuff from the official install docs here so feel free to check if there's updated instructions.

  1. Stay connected to your virtual machine in oracle and run the following commands to get owncast up and running :
  2. curl -s | bash
  3. curl | sudo tee /etc/systemd/system/owncast.service
  4. sudo systemctl daemon-reload
  5. sudo systemctl enable owncast
  6. sudo systemctl start owncast
  7. Stay connected, and now it's the following commands from Caddy's install docs:
  8. sudo apt install -y apt-transport-https
  9. curl -1sLf '' | sudo gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
  10. curl -1sLf '' | sudo tee /etc/apt/sources.list.d/caddy-stable.list
  11. sudo apt update
  12. sudo apt install caddy
  13. Create a Caddyfile for owncast :
  14. curl | sudo tee /etc/caddy/Caddyfile – Switch the hostname for yours: sudo sed -i 's/streams\.martyn\.berlin/YOUR.HOSTNAME/' /etc/caddy/Caddyfile (again, you are welcome to use nano, vim etc. if you prefer!)
  15. sudo systemctl daemon-reload
  16. sudo systemctl enable caddy
  17. sudo systemctl stop caddy
  18. sudo systemctl start caddy

Voila! All done! You can proceed to log in with user “admin” and password “abc123” (PLEASE CHANGE THIS as soon as you do!). Your stream key is the admin password.

Change the admin password (stream key), add extra resolutions, edit your home page, all the usual things, and you're good to go.

Final thoughts

This is a free way of getting your own owncast setup and will get you a working system, that should do you fine. What it doesn't do is updates, backups and helping you fix it if things go wrong. That might be fine for you, it might not. I'm just enabling you to give it a go.

I might terraform this at some point, and I long for a day where there's a good free-tier kubernetes where a lot of the above is nicely abstracted away, but here we are.

† Contents of that file in case it goes missing :

Description=Owncast Service



‡ The caddyfile contents : {
encode gzip

I'm gonna be doing a stream on Owncast soon and probably co-streaming to twitch where I already have some connections. That means I want both chats on screen, so twitch viewers can see the owncast chat and vice versa. So obviously I search the interwebs with whoogle “Chat Owncast OBS” and all the answers seem pretty complex. Example link that is well-written

Whilst I'm capable of setting up those things, I found it hard to believe that this was the best use of my time. So I searched a bit harder and found the right keywords to use – “embed” is the magic term. It's not perfect so I added some custom CSS and thought I'd make it easier for people to search for and document those changes I made.

So, to add your chat as an “overlay” in OBS you can simply add a Browser type source with the URL – In my case, I use a local IP here via http rather than the external https url, that saves having to sort out hairpin mode on my router.

Here is the custom CSS I use for making the chat a bit more compact :

.message {
  margin-top: 0px;
  margin-right: 0px;
  margin-bottom: 2px;
  margin-left: 0px;
  padding-top: 1px;
  padding-right: 1px;
  padding-bottom: 1px;
  padding-left: 5px;
}
.message-author {
  display: inline;
}
.message-author:after {
  content: ": ";
  padding: 0;
  display: inline;
}

Here is what it looks like before adding the custom css: Before

and after : After

I'm still not “happy” with the appearance, but perhaps some cool frontend person might be able to make it look better than I can. Of course I could just go the difficult way, but that seems excessive for now.

(Restored: Original date: May 2, 2022)

There's a saying that goes along the lines of “Some people collect things, like stamps or fridge magnets, and that's their hobby, but geeks collect hobbies”. That's pretty true of me, here's a bit of a non-exhaustive list :

  • Boardgaming
  • Computer Gaming
  • Programming & Infrastructure (that's my dayjob – SRE/Devops/Platform Engineering/latest buzzword...)
  • Electronics
  • 3D Printing / Laser Cutting / CNC
  • Home metal casting (bismuth)
  • Airbrush painting stuff I've printed
  • Home automation
  • Singing and Songwriting

To that last one, lately I've decided I want to put some of my songs “out there” on music platforms so I can say “oh yeah, look me up on spotify, apple music etc. if you want to hear my stuff”. That requires having a bit of a bio, which is what this post will serve as the basis of.

So, to paint a picture with words artist Martyn looks like this :

I've been surrounded by music my whole life: Dad is a bass player and producer, Mum sings and writes lyrics. I'm named after “John Martyn” and my middle name means Harmony (thanks mum!), so early on I had the right stuff in alignment. I played cornet in a brass band and also a “big band” in my teens, picked up (and never got great at) saxophone in my 20s, don't have the callouses or muscle memory for guitar, and can kinda “plinky-plonk” on a keyboard to make melodies or chord progressions in my DAW.

Some people like to see a list of influences, and who am I to blow against the wind*? It's a very long list so I'll just list a few here : Paul Simon, Peter Gabriel, Phil Collins, The Police, Pet Shop Boys, Propaganda, Pulp, Placebo... Okay, it's not just Ps, that's just a start, let's add some more : Alan Menken, Blue Oyster Cult, The Carpenters, David Bowie, Elvis Costello, Fleetwood Mac, George Michael, Heart, Imagine Dragons, Janis Joplin, Kate Bush, Limp Bizkit, The Mamas & Papas, Nightwish, Orchestral Manoeuvres in the Dark, Paramore, Queen, REM, Sting, Trent Reznor, Ultravox, Vivaldi, Wagner, XTC, Yazoo, ZZ Top and MANY more. *If you know the song, you know.

Like my parents, music isn't my dayjob, it's a passion, not a grind for me. My muse strikes and a song is in my head, and has to “get out”. Sometimes it's a lyric, sometimes it's a beat, sometimes it's a funky bass line.

In 2021 I attempted “Songtober”, which was to write and record a song a day from a set of prompts. I didn't manage 30 songs (the 31st was a day off!), but I did manage 25, of varying quality.

I have two “Personas” for music :

“LycanSong” – that's me, my DAW and so many plugins (instruments), and sometimes now my producer, his DAW and many more plugins! This is the name I compose music under, and in general I don't stick to a Genre. If you had to pin me down it would probably be Folk/Pop/Rock but that doesn't cover it all and is already three!

“AcapellaWolf” – that's me, my DAW as effectively a loopstation, and no instruments. Think Pentatonix but only one person, or aspiring to be like Jared Halley, Peter Hollens, Smooth McGroove or Mike Tompkins. I don't “write” music under this persona, but I might cover LycanSong tracks this way. This is also my streaming persona when I do stream on twitch; I also mod for (currently) one other music streamer, and help out in the Twitch Music and associated discords.

Am I looking to be “discovered”? Well, not really. I'm not after fame, and I'm fairly certain fortune is not on the cards for me with music either (I should probably have “pursued” it earlier in life).

(Restored: Original date: January 9, 2022)

If you've been following my previous blog posts, you now have a dual-boot pinenote with debian, but a rather black-looking e-ink screen and only a terminal. You probably want a GUI, wifi, the touchscreen working and the pen configured so you can interact with the device.

So let's start. First, wifi.

Create a file with your wifi configuration (change “home” to the SSID of your wifi and “SuperSecretPassword” to the wifi password):

cat > /etc/wpa_supplicant/wpa_supplicant.conf
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=wheel
network={
    ssid="home"
    psk="SuperSecretPassword"
}

(Press Control+D on a blank line to finish creating the file) Then run wpa_supplicant in the background :

wpa_supplicant -iwlan0 -c/etc/wpa_supplicant/wpa_supplicant.conf -B

After about a minute, you'll have an IP on wlan0 :

ip a show wlan0
2: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
     inet brd scope global noprefixroute wlan0 
         valid_lft forever preferred_lft forever
     inet6 dead::beef:1010:1010:0001/64 scope link       
          valid_lft forever preferred_lft forever

So we can use apt to install X windows and a DE. For now, we'll use xfce as it's lightweight:

apt install -y xfce4 onboard xserver-xorg-input-evdev network-manager-gnome

That's a lot of packages. Eventually though, you'll be returned to the prompt.

Note: I included network-manager-gnome in the package list above as wicd no longer seems to be in debian and xfce's own network management tool is abandoned. You can decide not to include it but to get network you'd have to use wpa_supplicant manually. You can remove it from the list if you prefer to manage wifi a different way.

Now you have some more config files to create :

First, the xorg config for rotating the pen and enabling touch input :

cat > /etc/X11/xorg.conf.d/pinenote.conf
Section "InputClass"
        Identifier "evdev touchscreen"
        MatchProduct "tt21000"
        MatchIsTouchscreen "on"
        #MatchDevicePath "/dev/input/event5"
        Driver        "evdev"
EndSection

Section "InputClass"
        Identifier    "RotateTouch"
        MatchProduct    "w9013"
        Option    "TransformationMatrix" "-1 0 1 0 -1 1 0 0 1"
EndSection

Control-D again for the prompt on a blank line.
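In case that TransformationMatrix line looks like magic: it's a 3×3 matrix applied to the normalized (x, y, 1) input coordinates, and this particular one is a 180° rotation:

```latex
\begin{pmatrix} -1 & 0 & 1 \\ 0 & -1 & 1 \\ 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} x \\ y \\ 1 \end{pmatrix}
=
\begin{pmatrix} 1 - x \\ 1 - y \\ 1 \end{pmatrix}
```

So a pen touch at (x, y) gets reported at (1−x, 1−y), flipping the input to match the rotated panel.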

And to enable the greeter to switch to the onscreen keyboard (note TWO chevrons!):

echo "keyboard=onboard" >> /etc/lightdm/lightdm-gtk-greeter.conf

Now's also as good a time as any to create a normal user (you don't have to call it fred!) and enable them to sudo:

adduser fred

Put in the password twice, enter for the other details then when the prompt returns:

apt install -y sudo
addgroup fred sudo

Now, I'm not usually a fan of saying this, but the easiest thing to do here is reboot, as dbus, lightdm and all sorts of other things need to talk to each other and it would be fiddly to get them all going by hand. So issue reboot, spam your interrupt key and use the last u-boot commands from the previous article. You should be greeted by the lightdm greeter.

To bring up the onscreen keyboard so you can log in, you can press F3 on the keyboard. I kid you not, that's the default. You can also enable it by clicking the “person in a bubble” icon in the top right though.

There you have it, a “minimal” XFCE desktop with working touch and pen.

I want to give a shout out here to everyone who has been involved with getting Linux on the pinenote up to this stage.

MASSIVE props to smauel who has been doing the kernel work that gets the panel and sound working! Thanks to everyone in the pine64 irc/discord/matrix pinenote channel for support too. Without them, I'd probably still be fighting stuff. Special thanks to irrenhaus, vveapon and pgwipeout there for their patience with me. Also a quick thanks to DorianRudolph, whose docs form a very important basis for my own. And of course, pine64 themselves, this is such a cool device!

Debootstrap problems

(Restored: Original date: January 9, 2022)

So I lost a day, maybe two to this piece of software, and I'm gonna have a little side-rant about this.

I have a device (pinenote) running android, and I have a kernel to boot it into linux, and I want to boot debian, because it's my preferred distro.

So okay, we have termux on the device, but after hours of fighting it, debootstrap there fails in many, many ways. This one was the final straw:

W: Failure trying to run: proot -w /home -b /dev -b /proc --link2symlink -0 -r /data/data/com.termux/files/home/target dpkg --force-overwrite --force-confold --skip-same-version --install /var/cache/apt/archives/libapparmor1_2.13.6-10_arm64.deb /var/cache/apt/archives/libargon2-1_0~20171227-0.2_arm64.deb /var/cache/apt/archives/libcryptsetup12_2%3a2.3.5-1_arm64.deb /var/cache/apt/archives/libip4tc2_1.8.7-1_arm64.deb /var/cache/apt/archives/libjson-c5_0.15-2_arm64.deb /var/cache/apt/archives/libkmod2_28-1_arm64.deb /var/cache/apt/archives/libcap2_1%3a2.44-1_arm64.deb /var/cache/apt/archives/dmsetup_2%3a1.02.175-2.1_arm64.deb /var/cache/apt/archives/libdevmapper1.02.1_2%3a1.02.175-2.1_arm64.deb /var/cache/apt/archives/systemd_247.3-6_arm64.deb /var/cache/apt/archives/systemd-timesyncd_247.3-6_arm64.deb
W: See /data/data/com.termux/files/home/target/debootstrap/debootstrap.log for details (possibly the package systemd is at fault)

BUT, dear reader, I have an alpine chroot handy (I used this to repartition the disk), so I try that.

NO DICE. No matter what I do, I get fun errors like Tried to extract package, but file already exists. Exit... which turns out to be a tar bug, maybe, because in the log you get the wonderful error:

Cannot change mode to rwxr-xr-x: No such file or directory

Another annoying situation: when using the --foreign option, I often got a chrootable system that didn't contain perl, which meant that --second-stage just barfed and failed.

Anyway, I'm not writing this post because I have a solution, EXCEPT I gave in and built a Dockerfile that builds the filesystem from a debian base. Once I had that, I then made github automatically build me a tarball using github actions.

So yeah, it's really annoying that debootstrap is supposed to be “easy to use” and “available with almost no dependencies” and yet every way I went about it, I didn't get a working system. It may work well on a debian system, but on non-debian ones it's a nightmare to be honest.

What would my suggestion be? Follow the alpine strategy and provide downloadable tarballs instead, like the ones I had to build myself on Github, by running debootstrap on a debian system.

Yes, I can simply run it myself on a debian box or using docker, but the reason I want to use a CI like github actions is because I'm documenting for others how to get debian on this device.
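For the curious, the Dockerfile route can be tiny. This is a sketch rather than my exact file – the base image tag and the package list are illustrative, and you'd tailor them to what you want in the rootfs:

```dockerfile
# Sketch: build a debian arm64 rootfs without ever running debootstrap,
# by starting from the official base image and customizing it.
FROM arm64v8/debian:bullseye
RUN apt-get update && \
    apt-get install -y --no-install-recommends systemd-sysv ifupdown && \
    rm -rf /var/lib/apt/lists/*
```

From there, docker build -t pn-rootfs . then docker create --name pn pn-rootfs and docker export pn | bzip2 > fs.tar.bz2 gives you the tarball; a github action just automates those steps.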

(Restored: original date: January 8, 2022)

IMPORTANT: This will trash your userdata. Back up your files first. You will get the full android back but without your data.

Pre-requisites:

  • Pinenote kernel, dtb and modules
  • Rooted android
  • Termux and ssh – this method is REALLY the easiest to get up and running with. If you want, you could fight replacing busybox to include curl and get files over some other way, but you're taking the path less travelled.
  • The data you need from your pinenote BACKED UP – I'm not kidding, it all goes.

Stage 1: prepare:

Copy onto your android device (I recommend using termux and scp): Image and rk3566-pinenote.dtb – the kernel and its device tree.

Stage 2: repartitioning

The only ext4 filesystem that isn't going to be upset by us placing linux on it is the /cache partition, and unfortunately debian with a window manager, wifi and onscreen keyboard is about 1GB – and so is that partition, so no headroom. So we need to repartition.

First, we'll use that /cache partition to house the smallest distro with parted (alpine, what a surprise), so in termux let's grab the stuff we need:

curl > min.tar.gz
su -
mount -o remount,rw /cache
cp Image rk3566-pinenote.dtb /cache/
tar -zxf min.tar.gz -C /cache
echo "nameserver $(getprop net.dns1)" > /cache/etc/resolv.conf 
chroot /cache
apk add --no-cache parted e2fsprogs

Now, you need that USB dongle that came with your pinenote for this next step:

  • connect the USB dongle to your pc in-line to the pinenote
  • fire up a terminal emulator connected to the device at 1500000/8n1
  • touch the touchscreen to see that you get output
  • reboot your pinenote and spam Ctrl+C in that terminal until you get the u-boot prompt


Then paste these u-boot commands :

load mmc 0:b ${kernel_addr_r} /Image
load mmc 0:b ${fdt_addr_r} /rk3566-pinenote.dtb
setenv bootargs ignore_loglevel root=/dev/mmcblk0p11 rw rootwait earlycon console=tty0 console=ttyS2,1500000n8 fw_devlink=off init=/bin/sh
booti ${kernel_addr_r} - ${fdt_addr_r}

You should get a prompt that looks similar to this :

/bin/sh: can't access tty; job control turned off
/ # ␛[6n

Now we have a VERY minimal linux running, so that's cool, but what we're here for is partitioning, so let's go!

export TERM=dumb
parted /dev/mmcblk0

At the (parted) prompt, run print. You should see something like this:

Model: Generic SD/MMC Storage Card (sd/mmc)
Disk /dev/mmcblk0: 124GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number  Start   End     Size    File system  Name      Flags
1      8389kB  12.6MB  4194kB               uboot
2      12.6MB  16.8MB  4194kB               trust
3      16.8MB  18.9MB  2097kB               waveform
4      18.9MB  23.1MB  4194kB               misc
5      23.1MB  27.3MB  4194kB               dtbo
6      27.3MB  28.3MB  1049kB               vbmeta
7      28.3MB  70.3MB  41.9MB               boot
8      70.3MB  74.4MB  4194kB               security
9      74.4MB  209MB   134MB                recovery
10      209MB   611MB   403MB                backup
11      611MB   1685MB  1074MB  ext4         cache
12      1685MB  1702MB  16.8MB  ext4         metadata
13      1702MB  4965MB  3263MB               super
14      4965MB  4982MB  16.8MB               logo
15      4982MB  5049MB  67.1MB  fat16        device
16      5049MB  124GB   119GB   f2fs         userdata

We're going to split that userdata partition – my suggestion (and what's documented here) is 8GB for Android and the rest for Linux. Userdata starts at 4965MB, so an end point of 13049M leaves android roughly 8GB. (It looks weird here because serial connections to odd terminals are, well, weird.)

(parted) resizepart 16 13049M
Warning: Shrinking a partition can cause data loss, are you sure you want to
Yes/No? yes
Error: Partition(s) 11 on /dev/mmcblk0 have been written, but we have been unable to inform the kernel of the change, probably because it/they are in use.
As a result, the old partition(s) will remain in use.  You should reboot now before making further changes.
Ignore/Cancel? Ignore
(parted) mkpart primary ext4 13G 100%
: Partition(s) 11 on /dev/mmcblk0 have been written, but we have been unable to inform the kernel of the change, probably because it/they are in use.
As a result, the old partition(s) will remain in use.  You should reboot now before making further changes.
Ignore/Cancel? Ignore
(parted) quit
Information: You may need to update /etc/fstab.
/ #
/ # ␛[6n exit

Here's just the commands for copy-pasta-funtimes:

resizepart 16 13049M
mkpart primary ext4 13G 100%
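If you want a different split, the arithmetic is straightforward: the new end point is where userdata starts plus however much android keeps. A sketch (ANDROID_MB is the only knob; 4965 comes from the print output above):

```shell
# userdata starts at 4965MB per parted's print output; give android
# ~8GB and the resizepart end point falls out
START_MB=4965
ANDROID_MB=8084
echo "resizepart 16 $((START_MB + ANDROID_MB))M"
```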

At this point your pinenote will attempt to reboot, but we really upset the android system (you did read the warning, right?). So now we need to get back to the u-boot prompt again (spam that ctrl-c whilst rebooting the device).

Once you got there :

=> fastboot usb 0
Enter fastboot...OK 

and remove the dongle between your usb cable and the pinenote and plug the cable in directly. Now we use the android fastboot command to wipe the userdata partition in a way it will like, and boot into recovery:

fastboot erase userdata
erasing 'userdata'...
OKAY [  4.026s]
finished. total time: 4.027s
fastboot reboot

The pinenote will switch to “Charging” mode, so power it on and watch the progress bar, which will cycle MANY times at this point (it's creating a fs on userdata for us, and possibly other files), but eventually the backlight will come on and then android will boot.

You probably want to reboot into fastboot mode again to re-flash your magisk'd boot partition (no need to start from a clean image; you can simply flash the magisk'd version again from fastboot). Follow Step 1 from the other blog post, then on your computer:

adb reboot bootloader
fastboot flash boot magisk_patched-23016_oYeer.img

Congratulations: android resized, and once you've launched the magisk app, back to rooted. Now, again, install termux and ssh and come back for stage 3. You're safe to restore your files and use android again; we're not going to mess with it from here on in.

Stage 3: (re)prep for Linux

Copy everything back to android:

  • Image, rk3566-pinenote.dtb – the kernel and its device tree
  • modules – the ones from your kernel build

You can grab these from my github if you like (not a random binary, but built from source – see the CI) with

curl -L >

Stage 4: minimal debian (finally!)

Here's a rant on why I suggest getting this from github

in termux:

su -
mkdir target
mkfs.ext4 /dev/block/mmcblk2p17
mount /dev/block/mmcblk2p17 target

Fetch a debian arm64 base image and extract it (jq is for the messy github API stuff; you could also just visit the releases page and copy the link for the withwifi tarball):

pkg install jq
curl -L $(curl --silent "" | jq -r '.assets[] | select(.name == "debian-bullseye-arm64-withwifi.tar.bz2") | .browser_download_url') > fs.tar.bz2
sudo tar -jxf fs.tar.bz2 --same-owner -C target/

Let's also add the kernel, dtbs, modules and firmware files from android at this point:

sudo cp Image rk3566-pinenote.dtb target/
sudo cp -r modules/ target/lib/
sudo mkdir -p target/lib/firmware/brcm
sudo cp -r /vendor/etc/firmware/* target/lib/firmware/
sudo cp /vendor/etc/firmware/fw_bcm43455c0_ag_cy.bin target/lib/firmware/brcm/brcmfmac43455-sdio.bin
sudo cp /vendor/etc/firmware/nvram_ap6255_cy.txt target/lib/firmware/brcm/brcmfmac43455-sdio.txt
sudo cp target/lib/firmware/BCM4345C0.hcd target/lib/firmware/brcm/BCM4345C0.hcd
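If you want to double-check that step, here's a hypothetical little helper (the paths come straight from the cp commands above) that reports whether each firmware file landed where brcmfmac and bluetooth will look:

```shell
# report whether each expected firmware file exists under the given root;
# paths taken from the copy commands above
check_fw() {
  root="$1"
  for f in lib/firmware/brcm/brcmfmac43455-sdio.bin \
           lib/firmware/brcm/brcmfmac43455-sdio.txt \
           lib/firmware/brcm/BCM4345C0.hcd; do
    if [ -e "$root/$f" ]; then
      echo "OK      $f"
    else
      echo "MISSING $f"
    fi
  done
}

check_fw target
```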

Now we need to chroot to set the root password. Note that the PATH and TMP env vars from android are not friendly, so we'll set those too before running any commands in the chroot. Also, sudo from tsu won't chroot, so we're back to su:

su -
chroot target /bin/bash
export TMP=/tmp/
export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
passwd

You can, if you want, add more stuff at this point with apt, but given we're going to have to “rescue boot” for the next stage, you may as well wait.

So, debian is installed but not really fully bootable yet; we can now boot into a rescue shell to fix that!

Stage 5: boot rescue

Again comes the need for the dongle – so go ahead and plug that in and start your terminal session in one window.

You can reboot either by issuing reboot in a su shell or by using the android interface. Then spam that control-c to get your u-boot prompt and issue the commands to boot the rescue shell:

load mmc 0:11 ${kernel_addr_r} /Image
load mmc 0:11 ${fdt_addr_r} /rk3566-pinenote.dtb
setenv bootargs ignore_loglevel root=/dev/mmcblk0p17 rw rootwait earlycon console=tty0 console=ttyS2,1500000n8 fw_devlink=off init=/bin/sh
booti ${kernel_addr_r} - ${fdt_addr_r}

Very similar to before, only this time we're on a different partition :-)

You'll want to wait about 30s for some more kernel messages to pop in, but then we'll grab the waveform partition and make it a file :

dd if=/dev/mmcblk0p3 of=/lib/firmware/waveform.bin bs=1k count=2048
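Why bs=1k count=2048? The waveform partition is 2097kB in the parted table earlier, i.e. 2 MiB. A quick local sanity check of the sizing (writing zeros to a temp file, not touching the real partition):

```shell
# bs=1k count=2048 copies exactly 2 MiB -- the size of mmcblk0p3
dd if=/dev/zero of=/tmp/waveform-sizing bs=1k count=2048 2>/dev/null
stat -c %s /tmp/waveform-sizing
```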

Now you can start making the ramdisk that debian expects (TERM=dumb helps on difficult terminals like miniterm in windows) :

export TERM=dumb
mount -t proc none /proc
mount -t sysfs none /sys
depmod -a
mkinitrd /dracut-initrd.img $(uname -r)
mkimage -A arm -T ramdisk -C none -n uInitrd -d /dracut-initrd.img /uInitrd.img

Ready to boot into full debian yet?! YES! When you type exit again, the pinenote will reboot, so get ready to spam the control-c again, and at the INTERRUPT prompt you need:

load mmc 0:11 ${kernel_addr_r} /Image
load mmc 0:11 ${fdt_addr_r} /rk3566-pinenote.dtb
load mmc 0:11 ${ramdisk_addr_r} /uInitrd.img
setenv bootargs ignore_loglevel root=/dev/mmcblk0p17 rw rootwait earlycon console=tty0 console=ttyS2,1500000n8 fw_devlink=off init=/sbin/init
booti ${kernel_addr_r} ${ramdisk_addr_r} ${fdt_addr_r}

Note that we're now loading a ramdisk. You'll also get a terminal on the screen, so you can use a USB-C docking station for a keyboard, mouse etc.

This post is way too long at this point, so next post will be a shorter one on getting X up and running with the touchscreen, pen and onscreen keyboard.

(Restored: Original date: January 8, 2022)

This is really just an expansion of Dorian's notes but with more of a step-by-step for those who want all the info in one place.

Step 1: adb access

First, enable developer mode utils:

  • Click the applications icon in the top menu (four squares, one rotated)
  • Click “Application Management”
  • If you see a list of apps, continue, if not click “Application Message”
  • If Settings is there, click it, if not, the 3 dot menu top right and Show system, then search for Settings and click it.
  • Click “Open”
  • Click “About tablet”
  • Click “Build number” 8 times (the first few it won't say anything, after 8 it should tell you “You are now a developer”)
  • Click the back arrow top left
  • Click “System” -> Click “Advanced” -> Click Developer options
  • Scroll down a lot until you get to “Default USB configuration”, click it and select “PTP”
  • Click the back arrow top left
  • Scroll back up and find USB debugging, click it and click ok
  • Plug the tablet into your computer, and a dialog will appear asking if you wish to “Allow USB debugging”.
  • Click allow, but first you probably don't want this popping up every time, so check the box “Always allow from this computer”

Congrats, now you can use ADB to connect to the Pinenote. If you don't have it, I recommend:

  • Windows – 15 second adb installer
  • Linux – apt install adb (and you'll probably want fastboot too, so apt install fastboot)
  • Mac – brew install android-platform-tools

Step 2: magisk root

You will need a copy of your boot partition to continue. Either follow Dorian's readme and get your own or grab the one from his archive.

Then we use ADB to push the boot.img and the known-good magisk APK to the device :

adb push boot.img /sdcard/boot.img
adb install magisk_c85b2a0.apk

Magisk should appear in the application menu (four squares, one rotated), if it doesn't, you can get to it the same way you did for settings. Open the application.

  • Click Install (to the right, under the cog)
  • Click Select and Patch a File
  • Click “Open” in the window that appears
  • Click “Phone”
  • Click “boot” (it has a ? icon)
  • Click “Let's go –>”
  • Note down the output file location

Back on your computer, pull the patched image :

adb pull /sdcard/Download/magisk_patched-23016_oYeer.img

Note: use the file location output in the previous step, because yours won't be called magisk_patched-23016_oYeer.img.

Now we need to get into either fastboot or rockusb mode on the tablet and flash the boot image. The easy way is to run adb reboot bootloader which puts the tablet into fastboot mode.

The e-ink screen will show the pine logo and no progress bar when this mode is enabled, and fastboot devices will show a device starting with PNOTE.

So push the patched image (remember to change the name):

fastboot flash boot magisk_patched-23016_oYeer.img

Once this has completed hold power for a while to turn off the device and then power it on again.

Reopen the Magisk app and after a while it should show a status of “installed”. You can check your root access with adb shell and running su which should pop up an allow/deny box.

Congratulations, your pinenote is rooted!

Bonus Content : ssh!

I suggest for ease of use (and lack of sucky terminal for adb) installing f-droid via the browser (google f-droid) and using f-droid to install termux. Then, you can ssh into the pinenote from a nice terminal.

  • Get f-droid from
  • upon downloading, android will ask you to allow unknown sources, go ahead and allow it in settings.
  • open it and wait (usually quite a while) for apps to appear in the front page. If they don't, tap “Updates” and do a “pull-down” to refresh and then click “latest” again.
  • search (bottom right) for termux. Sigh at the lack of appropriate sorting in the results, then select termux and click install.
  • Again, you'll need to allow f-droid to install “unknown sources”, so do that dance.
  • Open termux and wait until you get a prompt.
  • First, install openssh and set a password, something strongish like IncorrectPonyCapacitorPaperclip perhaps.

NOTE: at time of writing, due to a certificate issue, you may have issues installing packages from termux. If you do, the solution is to run termux-change-repo and select any repo other than

To do this, run the following commands :

pkg install termux-auth openssh
passwd

Then, every boot (annoying, I know) you just run sshd in termux and you can ssh to the device on port 8022. The username is somewhat random at install-time (thanks android) and can be retrieved with whoami. To get your ip, you can type ip a show wlan0 as you would under linux.

So a complete command to ssh in might look like :

ssh -p8022 u0_all8@

and to scp files

scp -P8022 some_file u0_all8@

(Restored: original date: January 8, 2022)

This blog post is mostly so I can link to it from my other post, dual-booting debian on the pinenote, but might prove useful for other purposes.

All of the commands to get smauel's kernel and build it are available as a Dockerfile here too.

Note: it has been suggested to use the official arm toolchain rather than the gnu one, because if you later use the compiler on the device, incompatibilities can happen. I may write an update to this post later, but you have been warned!

There's also a release on my github repo using github actions if you just want to download a zip with the completed files. The usual warnings about downloading random binary files from the internet apply but at least you can look at the ci config there too.

Prerequisites/assumptions:

  • A debian-based linux distro on an amd64 or other powerful machine (kernel compilation is a heavy process)
  • AT LEAST 6GB of disk space free on the build machine

First, check you have deb-src lines in /etc/apt/sources.list for the magic “get me almost all the dependencies to build a kernel please, debian”. If you don't have that, you have a lot more deps to install.

If your sources.list doesn't contain deb-src lines, here's a handy script to add them :

sed 's/^deb /deb-src /' /etc/apt/sources.list | sudo tee /etc/apt/sources.list.d/sources.src.list

then run apt update to get them updated.
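To see what that sed substitution actually does, here it is run over a stand-in sources.list line (the mirror URL is just an example):

```shell
# every binary "deb " line becomes a matching "deb-src " line
printf 'deb http://deb.debian.org/debian bullseye main\n' \
  | sed 's/^deb /deb-src /'
```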

I like to run the commands in ~/.local/src/ – I'm a bit old-school, and the kernel sources usually live in /usr/src/linux, but to work as non-root you relocate /usr/ to ~/.local/. You can build anywhere, just cd there, but if you're happy with that, run these commands:

mkdir -p ~/.local/src
cd ~/.local/src

First, get the dependencies for building, and don't ask me why build-dep linux-base doesn't include flex, bison or bc!

apt build-dep -y linux-base
apt install -y gcc-aarch64-linux-gnu binutils-aarch64-linux-gnu git flex bison bc

Then fetch the kernel tree for the device you're using – in our case, for the pinenote, we're getting smauel's fine work:

git clone
git checkout rk356x-ebc-dev

Okay, so here's another step you'll want to customize for another device, we build the defconfig into a valid .config for the pinenote:

make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- pinenote_defconfig

then we can go ahead and build the base kernel and modules (add -j$(nproc) if you want to use all your cores):

make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- all

Now, so we have an obvious place to get to the files (and to avoid needing root, and saving you from polluting your amd64 device with arm64 files outside of your build tree), we make a pack directory and install the dtbs and modules there :

mkdir pack
make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- INSTALL_MOD_PATH=${PWD}/pack modules_install
make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- INSTALL_PATH=${PWD}/pack dtbs_install

That's it! All the modules you need are in pack/lib/modules/{kernel-version}, so you'll want to copy those to /lib/modules/{kernel-version} on your pinenote's linux partition. Your kernel image is in arch/arm64/boot/Image (the first file you give u-boot) and the dtb (the other file u-boot needs) is in pack/dtbs/{kernel-version}/rockchip/rk3566-pinenote.dtb. At the time of writing, {kernel-version} is 5.16.0-rc8-00002-g6c7fc21536f3, but just look in pack/lib/modules/ after the install to be sure what your version is.