Martyn's random musings

Do you want to try an Owncast stream but don't have a VPS and don't want to spend money on it? How about we set up one on Oracle's “Always-Free” Tier?

First question out of the way: is Oracle's “Always-Free” Tier good enough to run Owncast? Simple answer: yes.

Longer answer, which you can skip if that satisfies you: we can either run their VM.Standard.E2.1.Micro class machine, which has 1 GB of RAM and a 2 GHz vCPU (ish) and should be enough, or an ARM machine with up to 4 CPUs and 24 GB of RAM. Given Owncast has an arm distribution, that's pretty incredible and works perfectly fine for Owncast.

Is this the best way to set up Owncast on Oracle? No! I'm doing this the easy way, and as an SRE person I would say “nooooo, don't do that, this is production, no clickops”, but I'm making this easy for streamers who are not so techie.

So, without further rambling, here's the prerequisites and the how-to.

Prerequisites:

  • A credit card that Oracle accepts. Seems to be Visa, Mastercard and Amex. Not sure if debit cards for those networks work, but I see no reason why not, they're mostly after ID verification here to stop spammers and bots creating accounts.
  • A domain name that you can control – that can be annoying, but you should be able to use a service like if you don't have your own domain name. You probably do want a real domain name as a streamer though, just saying...

So from the top:

  1. Sign up for an Oracle cloud free account – – The steps are really quite simple and, other than the card payment (which was less than a euro for me and refunded), painless. Be sure to choose the region closest to you as your home region because that will cut down latency to the Owncast box.
  2. Sign into your Oracle cloud account. Note that it's frustrating that they use two different usernames. The first is sent to you in the email, and you need to enter that to get to the real login page, then the second is your email address.
  3. Navigate to create an instance – Burger menu –> Compute –> Instances then click Compartment and choose the one marked (root) and click “Create Instance”
  4. Change the shape of the instance to meet our needs. I suggest using Ubuntu as they don't have Debian (my usual preference), and I recommend using half of your free tier allowance, which allows you to use the other half if you need to try a new version out, for instance.
    • Click “Edit” next to Image and shape
    • Click “Change Image” and select “Canonical Ubuntu” and click “Select Image”
    • Click “Change Shape” and select “Ampere” and “VM.Standard.A1.Flex”. Scroll down a little and change the “Number of OCPUs” to 2. The Memory should automatically change to 12, but if it doesn't set that as well. It's overkill but better than being too strict. Click “Select Shape”
    • Under “Add SSH keys” either click on the “Save private key” and “Save public key” links or if you know what an ssh key is and you already have one, click on upload or paste public keys.
    • Click “Create”
  5. Once your newly created machine has booted, you should then allow web traffic to it. This needs doing in a couple of places (well done to Oracle for making this secure by default, but it makes for a longer document here!)
    • In the instance page click on the blue link next to “Virtual cloud network” under Instance details.
    • Scroll down to “Security Lists” on the left hand side
    • Click “Add Ingress Rules” and fill in the following details :
    • – Source Type : CIDR
    • – Source CIDR :
    • – IP Protocol : TCP
    • – Source Port Range : LEAVE BLANK
    • – Destination Port Range : 80
    • – Description : (Optional but my recommendation) HTTP
    • Click + Another Ingress Rule and fill in almost the same details
    • – Source Type : CIDR
    • – Source CIDR :
    • – IP Protocol : TCP
    • – Source Port Range : LEAVE BLANK
    • – Destination Port Range : 443 This is the difference from the previous rule
    • – Description : (Optional but my recommendation) HTTPS
    • Click + Another Ingress Rule and fill in almost the same details
    • – Source Type : CIDR
    • – Source CIDR :
    • – IP Protocol : TCP
    • – Source Port Range : LEAVE BLANK
    • – Destination Port Range : 1935 This is the difference from the previous rule
    • – Description : (Optional but my recommendation) RTMP
    • Click Add Ingress Rules
    • Go back to the Burger menu, click Networking and Virtual Cloud Networks.
    • Click the VCN in the table (blue text, should be something like vcn-20220909-2138)
    • Click on Network Security Groups, click Create Network Security Group and give it a Name (I chose allowown because it's allowing owncast, I'm creative like that) and click Next.
    • Enter almost exactly all the above stuff over again. 2/3 of network access is then done...:
    • – Direction: Ingress
    • – Source Type : CIDR
    • – Source CIDR :
    • – IP Protocol : TCP
    • – Source Port Range : LEAVE BLANK
    • – Destination Port Range : 80
    • – Description : (Optional but my recommendation) HTTP
    • + Another rule
    • – Direction: Ingress
    • – Source CIDR :
    • – IP Protocol : TCP
    • – Source Port Range : LEAVE BLANK
    • – Destination Port Range : 443 This is the difference from the previous rule
    • – Description : (Optional but my recommendation) HTTPS
    • + Another rule
    • – Direction: Ingress
    • – Source CIDR :
    • – IP Protocol : TCP
    • – Source Port Range : LEAVE BLANK
    • – Destination Port Range : 1935 This is the difference from the previous rule
    • – Description : (Optional but my recommendation) RTMP
    • Create
    • Now we need to assign it to the instance NIC so Burger Menu –> Compute –> Instances
    • Click on the instance in the table
    • Network Security Groups: edit
    • + Another network security group
    • Select a value –> allowown –> Save changes
  6. Now is a good time to assign a DNS name to the IP of the machine. This is pretty much out of scope, but make an A record pointing at the Public IP address listed on the instance page. I'll use for an example from here.
  7. Okay, so all the “infrastructure” is done now, we just need to connect to the machine by ssh and install Owncast and Caddy (this does the https stuff for us). First, connect via SSH to the Public IP of your machine using the ssh key you downloaded or used in creation. Again, fairly well documented elsewhere, feel free to use PuTTY or other client, but this is somewhat out of scope here.
  8. The third and final network allow is to edit the iptables firewall on the Ubuntu virtual machine. This is annoyingly a text edit; here's a sed line that should work, but basically you need to duplicate the line that ends with 22 -j ACCEPT three times, changing 22 to 80, 443 and 1935 on the new lines.
    • Copy pasta: sudo sed -i s/"\(^.*22 -j ACCEPT\)"/"\1\n-A INPUT -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT\n-A INPUT -p tcp -m state --state NEW -m tcp --dport 443 -j ACCEPT\n-A INPUT -p tcp -m state --state NEW -m tcp --dport 1935 -j ACCEPT"/ /etc/iptables/rules.v4
    • OR sudo nano /etc/iptables/rules.v4 and duplicate the 22 line with 80, 443 and 1935 instead. LEAVE THE 22 LINE THERE!
    • sudo iptables-restore < /etc/iptables/rules.v4
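If you want to sanity-check that sed before pointing it at the live file, you can dry-run the same substitution on a scratch copy (a sketch; the sample file below just mimics the relevant line of /etc/iptables/rules.v4):

```shell
# Dry-run the sed on a scratch copy; the sample file mimics the 22 line
tmp=$(mktemp)
echo '-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT' > "$tmp"
sed -i 's/\(^.*22 -j ACCEPT\)/\1\n-A INPUT -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT\n-A INPUT -p tcp -m state --state NEW -m tcp --dport 443 -j ACCEPT\n-A INPUT -p tcp -m state --state NEW -m tcp --dport 1935 -j ACCEPT/' "$tmp"
cat "$tmp"   # the 22 line plus three new lines for 80, 443 and 1935
```

If the output shows all four ACCEPT lines with 22 still first, you're safe to run it against the real file.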

From here I'm repeating some stuff from the official install docs here, so feel free to check whether there are updated instructions.

  1. Stay connected to your virtual machine in oracle and run the following commands to get owncast up and running :
  2. curl -s | bash
  3. curl | sudo tee /etc/systemd/system/owncast.service
  4. sudo systemctl daemon-reload
  5. sudo systemctl enable owncast
  6. sudo systemctl start owncast
  7. Stay connected and now it's the following commands from Caddy's install docs
  8. sudo apt install -y apt-transport-https
  9. curl -1sLf '' | sudo gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
  10. curl -1sLf '' | sudo tee /etc/apt/sources.list.d/caddy-stable.list
  11. sudo apt update
  12. sudo apt install caddy
  13. Create a Caddyfile for owncast :
  14. curl | sudo tee /etc/caddy/Caddyfile – Switch the hostname for yours ( : sudo sed -i s/streams\.martyn\.berlin/ /etc/caddy/Caddyfile (again, you are welcome to use nano, vim etc. if you prefer!)
  15. sudo systemctl daemon-reload
  16. sudo systemctl enable caddy
  17. sudo systemctl stop caddy
  18. sudo systemctl start caddy
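If you want to check the hostname swap from step 14 before touching the real Caddyfile, try the substitution on a scratch file first (stream.example.com is a stand-in for your own hostname):

```shell
# Dry-run the hostname substitution on a copy, not /etc/caddy/Caddyfile
f=$(mktemp)
echo 'streams.martyn.berlin {' > "$f"
sed -i 's/streams\.martyn\.berlin/stream.example.com/' "$f"
cat "$f"   # prints: stream.example.com {
```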

Voilà! All done! You can proceed to log in with user “admin” and password “abc123” (PLEASE CHANGE THIS as soon as you do!). Your stream key is the admin password.

Change the admin password (stream key), add extra resolutions, edit your home page, all the usual things, and you're good to go.

Final thoughts

This is a free way of getting your own owncast setup and will get you a working system that should do you fine. What it doesn't get you is updates, backups and help fixing things if they go wrong. That might be fine for you, it might not. I'm just enabling you to give it a go.

I might terraform this at some point, and I long for a day where there's a good free-tier kubernetes where a lot of the above is nicely abstracted away, but here we are.

† Contents of that file in case it goes missing :

Description=Owncast Service
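In case the fragment above isn't enough, a unit file along these lines should work (a sketch: the user, working directory and binary path are assumptions for a typical install, so adjust them to wherever you put the owncast binary):

```
[Unit]
Description=Owncast Service
After=network.target

[Service]
Type=simple
User=owncast
WorkingDirectory=/opt/owncast
ExecStart=/opt/owncast/owncast
Restart=on-failure

[Install]
WantedBy=multi-user.target
```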



‡ The caddyfile contents : {
encode gzip
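For completeness, a minimal Caddyfile for Owncast might look like this (stream.example.com is a stand-in for your hostname, and 8080 is Owncast's default listen port):

```
stream.example.com {
	encode gzip
	reverse_proxy 127.0.0.1:8080
}
```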

I'm gonna be doing a stream on Owncast soon and probably co-streaming to twitch where I already have some connections. That means I want both chats on screen, so twitch viewers can see the owncast chat and vice versa. So obviously I search the interwebs with whoogle “Chat Owncast OBS” and all the answers seem pretty complex. Example link that is well-written

Whilst I'm capable of setting up those things, I found it hard to believe that this was the best use of my time. So I searched a bit harder and found the right keywords to use – “embed” is the magic term. It's not perfect so I added some custom CSS and thought I'd make it easier for people to search for and document those changes I made.

So, to add your chat as an “overlay” in OBS you can simply add a Browser type source with the URL – In my case, I use a local IP here via http rather than the external https URL, which saves having to sort out hairpin mode on my router.

Here is the custom CSS I use for making the chat a bit more compact :

.message {
  margin-top: 0px;
  margin-right: 0px;
  margin-bottom: 2px;
  margin-left: 0px;
  padding-top: 1px;
  padding-right: 1px;
  padding-bottom: 1px;
  padding-left: 5px;
}
.message-author {
  display: inline;
}
.message-author:after {
  content: ": ";
  padding: 0;
  display: inline;
}
Here is what it looks like before adding the custom css: Before

and after : After

I'm still not “happy” with the appearance, but perhaps some cool frontend person might be able to make it look better than I can. Of course I could just go the difficult way, but that seems excessive for now.

(Restored: Original date: May 2, 2022)

There's a saying that goes along the lines of “Some people collect things, like stamps or fridge magnets, and that's their hobby, but geeks collect hobbies”. That's pretty true of me, here's a bit of a non-exhaustive list :

  • Boardgaming
  • Computer Gaming
  • Programming & Infrastructure (that's my dayjob – SRE/Devops/Platform Engineering/latest buzzword...)
  • Electronics
  • 3D Printing / Laser Cutting / CNC
  • Home metal casting (bismuth)
  • Airbrush painting stuff I've printed
  • Home automation
  • Singing and Songwriting

To that last one, lately I've decided I want to put some of my songs “out there” on music platforms so I can say “oh yeah, look me up on spotify, apple music etc. if you want to hear my stuff”. That requires having a bit of a bio, which is what this post will serve as the basis of.

So, to paint a picture with words artist Martyn looks like this :

Always been surrounded by music my whole life, Dad is a Bass player and producer, Mum sings and writes lyrics. I'm named after “John Martyn” and my middle name means Harmony (thanks mum!) so early on, I had the right stuff in alignment. I played Cornet in a brass band and also a “Big band” in my teens, picked up and never got great at saxophone in my 20s, don't have the calluses or muscle memory for guitar and can kinda “plinky-plonk” on a keyboard to make melodies or chord progressions in my DAW.

Some people like to see a list of influences, and who am I to blow against the wind*? It's a very long list so I'll just list a few here : Paul Simon, Peter Gabriel, Phil Collins, The Police, Pet Shop Boys, Propaganda, Pulp, Placebo... Okay, it's not just Ps, that's just a start, let's add some more : Alan Menken, Blue Oyster Cult, The Carpenters, David Bowie, Elvis Costello, Fleetwood Mac, George Michael, Heart, Imagine Dragons, Janis Joplin, Kate Bush, Limp Bizkit, The Mamas & Papas, Nightwish, Orchestral Manoeuvres in the Dark, Paramore, Queen, REM, Sting, Trent Reznor, Ultravox, Vivaldi, Wagner, XTC, Yazoo, ZZ Top and MANY more. *If you know the song, you know.

Like my parents, music isn't my dayjob, it's a passion, not a grind for me. My muse strikes and a song is in my head, and has to “get out”. Sometimes it's a lyric, sometimes it's a beat, sometimes it's a funky bass line.

In 2021 I attempted “Songtober” which was to write and record a song a day, from a set of prompts. I didn't manage 30 songs (31st was a day off!), but I did manage 25, of varying quality.

I have two “Personas” for music :

“LycanSong” – that's me, my DAW and so many plugins (instruments), and sometimes now my producer, his DAW and many more plugins! This is the name I compose music under, and in general I don't stick to a Genre. If you had to pin me down it would probably be Folk/Pop/Rock but that doesn't cover it all and is already three!

“AcapellaWolf” – that's me, my DAW as effectively a loopstation, and no instruments. Think Pentatonics but only one person, or aspiring to be like Jared Halley, Peter Hollens, Smooth McGroove or Mike Tompkins. I don't “write” music under this persona, but I might cover LycanSong tracks this way. This is also my streaming persona when I do stream on twitch, and I also mod for (currently) one other Music streamer, and help out in the Twitch Music and associated discords.

Am I looking to be “discovered”? Well, not really. I'm not after fame, and I'm fairly certain fortune is not on the cards for me with music either (should probably have “pursued” it earlier in life).

(Restored: Original date: January 9, 2022)

If you've been following my previous blog posts, you now have a dual-boot pinenote with debian, but you have a rather black-looking e-ink screen and only a terminal entry. You probably want a GUI, wifi, the touchscreen working and the pen configured so you can interact with the device.

So let's start. First, wifi.

create a file with your wifi configuration (change “home” to the ssid of your wifi and “SuperSecretPassword” to the wifi password) :

cat > /etc/wpa_supplicant/wpa_supplicant.conf
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=wheel
network={
    ssid="home"
    psk="SuperSecretPassword"
}
(Press Control+D on a blank line to finish creating the file) Then run wpa_supplicant in the background :

wpa_supplicant -iwlan0 -c/etc/wpa_supplicant/wpa_supplicant.conf -B

After about a minute, you'll have an IP on wlan0 :

ip a show wlan0
2: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
     inet brd scope global noprefixroute wlan0 
         valid_lft forever preferred_lft forever
     inet6 dead::beef:1010:1010:0001/64 scope link       
          valid_lft forever preferred_lft forever
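If you'd rather script that wait than count to sixty, a small poll loop does the job (a sketch — wait_for_ip is my own helper name, not a standard tool):

```shell
# Poll until the given interface has an IPv4 address, up to ~60 seconds
wait_for_ip() {
  tries=0
  until ip -4 addr show "$1" 2>/dev/null | grep -q 'inet '; do
    tries=$((tries+1))
    [ "$tries" -ge 30 ] && return 1
    sleep 2
  done
}
# usage: wait_for_ip wlan0 && echo "got an address"
```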

So we can use apt to install xwindows and a DE. For now, we'll use xfce as it's lightweight :

apt install -y xfce4 onboard xserver-xorg-input-evdev network-manager-gnome

That's a lot of packages. Eventually though, you'll be returned to the prompt.

Note: I included network-manager-gnome in the package list above as wicd no longer seems to be in debian and xfce's own network management tool is abandoned. You can remove it from the list if you prefer to manage wifi a different way, but then you'd have to keep using wpa_supplicant manually.

Now you have some more config files to create :

First, the xorg config for rotating the pen and enabling touch input :

cat > /etc/X11/xorg.conf.d/pinenote.conf
Section "InputClass"
        Identifier "evdev touchscreen"
        MatchProduct "tt21000"
        MatchIsTouchscreen "on"
        #MatchDevicePath "/dev/input/event5"
        Driver        "evdev"
EndSection

Section "InputClass"
        Identifier    "RotateTouch"
        MatchProduct    "w9013"
        Option    "TransformationMatrix" "-1 0 1 0 -1 1 0 0 1"
EndSection
Control-D again for the prompt on a blank line.

And to enable the greeter to switch to the onscreen keyboard (note TWO chevrons!):

echo "keyboard=onboard" >> /etc/lightdm/lightdm-gtk-greeter.conf
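If you're wondering why the chevrons matter: a single > would wipe the rest of the greeter config, while >> appends. Quick demo on a scratch file:

```shell
# '>' truncates the file; '>>' appends to it
f=$(mktemp)
echo "existing=setting" > "$f"    # write (truncates first)
echo "keyboard=onboard" >> "$f"   # append — the existing line survives
cat "$f"                          # both lines are present
```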

Now's also as good a time as any to create a normal user (you don't have to call it fred!) and enable them to sudo:

adduser fred

Put in the password twice, enter for the other details then when the prompt returns:

apt install -y sudo
addgroup fred sudo

Now, I'm not usually a fan of saying this, but the easiest thing to do here is reboot, as dbus, lightdm and all sorts of other things need to talk to each other and it would be fiddly to get them to do so. So issue reboot, spam your interrupt and use the last u-boot commands from the previous article. You should be greeted by the lightdm greeter.

To bring up the onscreen keyboard so you can log in, you can press F3 on the keyboard. I kid you not, that's the default. You can also enable it by clicking the “person in a bubble” icon in the top right though.

There you have it, a “minimal” XFCE desktop with working touch and pen.

I want to give a shout out here to everyone who has been involved with getting Linux on the pinenote up to this stage.

MASSIVE props to smauel who has been doing the kernel work that gets the panel and sound working! Thanks to everyone in the pine64 irc/discord/matrix pinenote channel for support too. Without them, I'd probably still be fighting stuff. Special thanks to irrenhaus, vveapon and pgwipeout there for their patience with me. Also a quick thanks to DorianRudolph who's docs form some very important basis for my own. And of course, pine64 themselves, this is such a cool device!

Debootstrap problems

(Restored: Original date: January 9, 2022)

So I lost a day, maybe two to this piece of software, and I'm gonna have a little side-rant about this.

I have a device (pinenote) running android, and I have a kernel to boot it into linux, and I want to boot debian, because it's my preferred distro.

So okay, we have termux on the device, but after hours of fighting it, debootstrap there fails in many many ways. One here was the final straw :

W: Failure trying to run: proot -w /home -b /dev -b /proc --link2symlink -0 -r /data/data/com.termux/files/home/target dpkg --force-overwrite --force-confold --skip-same-version --install /var/cache/apt/archives/libapparmor1_2.13.6-10_arm64.deb /var/cache/apt/archives/libargon2-1_0~20171227-0.2_arm64.deb /var/cache/apt/archives/libcryptsetup12_2%3a2.3.5-1_arm64.deb /var/cache/apt/archives/libip4tc2_1.8.7-1_arm64.deb /var/cache/apt/archives/libjson-c5_0.15-2_arm64.deb /var/cache/apt/archives/libkmod2_28-1_arm64.deb /var/cache/apt/archives/libcap2_1%3a2.44-1_arm64.deb /var/cache/apt/archives/dmsetup_2%3a1.02.175-2.1_arm64.deb /var/cache/apt/archives/libdevmapper1.02.1_2%3a1.02.175-2.1_arm64.deb /var/cache/apt/archives/systemd_247.3-6_arm64.deb /var/cache/apt/archives/systemd-timesyncd_247.3-6_arm64.deb
W: See /data/data/com.termux/files/home/target/debootstrap/debootstrap.log for details (possibly the package systemd is at fault)

BUT, dear reader, I have an alpine chroot handy (I used this to repartition the disk), so I try that.

NO DICE. No matter what I do, I get fun errors like Tried to extract package, but file already exists. Exit... which turns out to be a tar bug, maybe... because in the log you get the wonderful error:

Cannot change mode to rwxr-xr-x: No such file or directory

Another annoying situation was using the --foreign option, I often got a chrootable system that didn't contain perl. This meant that --second-stage just barfed and failed.

Anyway, I'm not writing this post because I have a solution, EXCEPT I gave in and built a Dockerfile that builds the filesystem from a debian base. Once I had that, I then made github automatically build me a tarball using github actions.
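The Dockerfile approach looks something like this (a sketch, not my exact file — the qemu-user-static second stage is one common way to finish an arm64 rootfs on an amd64 builder; suite and paths are illustrative):

```
FROM debian:bullseye
RUN apt-get update && apt-get install -y debootstrap qemu-user-static

# First stage runs fine on the amd64 build host...
RUN debootstrap --arch=arm64 --foreign bullseye /rootfs http://deb.debian.org/debian

# ...then the second stage runs inside the rootfs under qemu emulation
RUN cp /usr/bin/qemu-aarch64-static /rootfs/usr/bin/
RUN chroot /rootfs /debootstrap/debootstrap --second-stage

# Pack the result up as the release artifact
RUN tar -C /rootfs -cjf /debian-bullseye-arm64.tar.bz2 .
```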

So yeah, it's really annoying that debootstrap is supposed to be “easy to use” and “available with almost no dependencies” and yet every way I went about it, I didn't get a working system. It may work well on a debian system, but on non-debian ones it's a nightmare to be honest.

What would my suggestion be? Follow the alpine strategy and provide downloadable tarballs instead. Like the ones I had to build myself on Github, by running debootstrap, but on a debian system.

Yes, I can simply run it myself on a debian box or using docker, but the reason I want to use a CI like github actions is because I'm documenting for others how to get debian on this device.

(Restored: original date: January 8, 2022)

IMPORTANT: This will trash your userdata. Back up your files first. You will get the full android back but without your data.

Pre-requisites:

  • Pinenote kernel, dtb and modules
  • Rooted android
  • Termux and ssh – this method is REALLY the easiest to get up and running with. If you want, you could fight replacing busybox to include curl and get files over some other way, but you're taking the path less travelled.
  • Your data you need on your pinenote BACKED UP – I'm not kidding, it all goes.

Stage 1: prepare:

Copy onto your android device (I recommend using termux and scp) : – Image, rk3566-pinenote.dtb – the kernel and its device tree.

Stage 2: repartitioning

The only ext4 filesystem that isn't going to be upset by us placing linux on it is the /cache partition and unfortunately, debian with a window manager, wifi and onscreen keyboard is about 1Gig, and so is that partition, so no headroom. So we need to repartition.

First, we'll use that /cache partition to house the smallest distro with parted (alpine, what a surprise), so in termux let's grab the stuff we need :

curl > min.tar.gz
su -
mount -o remount,rw /cache
cp Image rk3566-pinenote.dtb /cache/
tar -zxf min.tar.gz -C /cache
echo "nameserver $(getprop net.dns1)" > /cache/etc/resolv.conf 
chroot /cache
apk add --no-cache parted e2fsprogs

Now, you need that USB dongle that came with your pinenote for this next step :

  • connect the USB dongle to your pc in-line to the pinenote
  • fire up a terminal emulator connected to the device at 1500000/8n1
  • touch the touchscreen to see that you get output
  • reboot your pinenote and spam Ctrl+C in that terminal until you get


Then paste these u-boot commands :

load mmc 0:b ${kernel_addr_r} /Image
load mmc 0:b ${fdt_addr_r} /rk3566-pinenote.dtb
setenv bootargs ignore_loglevel root=/dev/mmcblk0p11 rw rootwait earlycon console=tty0 console=ttyS2,1500000n8 fw_devlink=off init=/bin/sh
booti ${kernel_addr_r} - ${fdt_addr_r}

You should get a prompt that looks similar to this :

/bin/sh: can't access tty; job control turned off
/ # ␛[6n

Now we have a VERY minimal linux running, so that's cool, but what we're here for is partitioning, so let's go!

export TERM=dumb
parted /dev/mmcblk0
print

You should see something like this :

Model: Generic SD/MMC Storage Card (sd/mmc)
Disk /dev/mmcblk0: 124GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number  Start   End     Size    File system  Name      Flags
1      8389kB  12.6MB  4194kB               uboot
2      12.6MB  16.8MB  4194kB               trust
3      16.8MB  18.9MB  2097kB               waveform
4      18.9MB  23.1MB  4194kB               misc
5      23.1MB  27.3MB  4194kB               dtbo
6      27.3MB  28.3MB  1049kB               vbmeta
7      28.3MB  70.3MB  41.9MB               boot
8      70.3MB  74.4MB  4194kB               security
9      74.4MB  209MB   134MB                recovery
10      209MB   611MB   403MB                backup
11      611MB   1685MB  1074MB  ext4         cache
12      1685MB  1702MB  16.8MB  ext4         metadata
13      1702MB  4965MB  3263MB               super
14      4965MB  4982MB  16.8MB               logo
15      4982MB  5049MB  67.1MB  fat16        device
16      5049MB  124GB   119GB   f2fs         userdata

We're going to split that userdata partition – my suggestion (and what's documented here) is 8 GB for Android and the rest for Linux : (it looks weird here because serial connections to odd terminals are, well, weird)

(parted) resizepart 16 13049M
Warning: Shrinking a partition can cause data loss, are you sure you want to continue?
Yes/No? yes
Error: Partition(s) 11 on /dev/mmcblk0 have been written, but we have been unable to inform the kernel of the change, probably because it/they are in use.
As a result, the old partition(s) will remain in use.  You should reboot now before making further changes.
Ignore/Cancel? Ignore
(parted) mkpart primary ext4 13G 100%
Error: Partition(s) 11 on /dev/mmcblk0 have been written, but we have been unable to inform the kernel of the change, probably because it/they are in use.
As a result, the old partition(s) will remain in use.  You should reboot now before making further changes.
Ignore/Cancel? Ignore
(parted) quit
Information: You may need to update /etc/fstab.
/ #
/ # ␛[6n exit

Here's just the commands for copy-pasta-funtimes:

resizepart 16 13049M
mkpart primary ext4 13G 100%
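If you want a different split, the resize target is simply where userdata starts (5049 MB in the print output above) plus however much you want to leave for Android; the 13049M above comes from:

```shell
# Compute the parted resize target: userdata start + space kept for Android
USERDATA_START_MB=5049   # "Start" of partition 16 in the parted print output
ANDROID_MB=8000          # roughly 8 GB left for Android
echo "resizepart 16 $((USERDATA_START_MB + ANDROID_MB))M"
# prints: resizepart 16 13049M
```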

At this point your pinenote will attempt to reboot, but we really upset the android system (you did read the warning, right?). So now we need to get back to the u-boot prompt again (spam that ctrl-c whilst rebooting the device).

Once you got there :

=> fastboot usb 0
Enter fastboot...OK 

and remove the dongle between your usb-cable and the pinenote and plug in the cable directly. Now we use the android fastboot command to wipe the userdata part in a way it will like, and boot into recovery :

fastboot erase userdata
erasing 'userdata'...
OKAY [  4.026s]
finished. total time: 4.027s
fastboot reboot

The pinenote will switch to “Charging” mode, so power it on and watch the progress-bar which will cycle MANY times at this point (it's creating a fs on userdata for us and possibly other files) but eventually the backlight will come on and then android will boot.

You probably want to reboot into fastboot mode again to re-flash your magisk'd boot partition (no need to start from a clean image though, you can simply flash the magisk'd version again from fastboot). Follow Step 1 from the other blog post then on your computer :

adb reboot bootloader
fastboot flash boot magisk_patched-23016_oYeer.img

Congratulations, android resized, and once you've launched the magisk app, back to rooted. Now, again, install termux and ssh and come back for stage 3. You're safe to restore your files and use android again, we're not going to mess with it from hereon in.

Stage 3: (re)prep for Linux

copy everything back to android : – Image, rk3566-pinenote.dtb – the kernel and its device tree.

– modules – the ones from your kernel build.

You can grab these from my github if you like (not a random binary, but built from source, see the CI) with

curl -L >

Stage 4: minimal debian (finally!)

Here's a rant on why I suggest getting this from github

in termux:

su -
mkdir target
mkfs.ext4 /dev/block/mmcblk2p17
mount /dev/block/mmcblk2p17 target

Fetch a debian arm64 base image and extract it (jq for the messy github stuff, you could also just visit and copy the link for the withwifi tarball) :

pkg install jq
curl -L $(curl --silent "" | jq '.assets[] | select(.name == "debian-bullseye-arm64-withwifi.tar.bz2") | .browser_download_url') > fs.tar.bz2
sudo tar -jxf fs.tar.bz2 --same-owner -C target/

Let's also add the kernel, dtbs, modules and firmware files from android at this point:

sudo cp Image rk3566-pinenote.dtb target/
sudo cp -r modules/ target/lib/
sudo mkdir -p target/lib/firmware/brcm
sudo cp -r /vendor/etc/firmware/* target/lib/firmware/
sudo cp /vendor/etc/firmware/fw_bcm43455c0_ag_cy.bin target/lib/firmware/brcm/brcmfmac43455-sdio.bin
sudo cp /vendor/etc/firmware/nvram_ap6255_cy.txt target/lib/firmware/brcm/brcmfmac43455-sdio.txt
sudo cp target/lib/firmware/BCM4345C0.hcd target/lib/firmware/brcm/BCM4345C0.hcd

Now we need to chroot to set the root password. Note that the PATH and TMP env vars from android are not friendly, so we'll set those too before running any commands in the chroot. Also, sudo from tsu won't chroot, so we're back to su

su -
chroot target /bin/bash
export TMP=/tmp/
export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
passwd

You can, if you want, add more stuff at this point with apt, but given we're going to have to “rescue boot” for the next stage, you may as well wait.

So, debian is installed but not really fully bootable yet; we can now boot into a rescue shell to fix that!

Stage 5: boot rescue

Again comes the need for the dongle – so go ahead and plug that in and start your terminal session in one window.

You can reboot either by issuing reboot in a su shell or by using the android interface to reboot. Then spam that control-c to get your u-boot prompt and issue the commands to boot rescue shell :

load mmc 0:11 ${kernel_addr_r} /Image
load mmc 0:11 ${fdt_addr_r} /rk3566-pinenote.dtb
setenv bootargs ignore_loglevel root=/dev/mmcblk0p17 rw rootwait earlycon console=tty0 console=ttyS2,1500000n8 fw_devlink=off init=/bin/sh
booti ${kernel_addr_r} - ${fdt_addr_r}

Very similar to before, only this time we're on a different partition :–)

You'll want to wait about 30s for some more kernel messages to pop in, but then we'll grab the waveform partition and make it a file :

dd if=/dev/mmcblk0p3 of=/lib/firmware/waveform.bin bs=1k count=2048

Now you can start making the ramdisk that debian expects (TERM=dumb helps on difficult terminals like miniterm in windows) :

export TERM=dumb
mount -t proc none /proc
mount -t sysfs none /sys
depmod -a
mkinitrd /dracut-initrd.img $(uname -r)
mkimage -A arm -T ramdisk -C none -n uInitrd -d /dracut-initrd.img /uInitrd.img

Ready to boot into full debian yet?! YES! When you type exit again, the pinenote will reboot, so get ready to spam the control-c again and at the INTERRUPT prompt you need :

load mmc 0:11 ${kernel_addr_r} /Image
load mmc 0:11 ${fdt_addr_r} /rk3566-pinenote.dtb
load mmc 0:11 ${ramdisk_addr_r} /uInitrd.img
setenv bootargs ignore_loglevel root=/dev/mmcblk0p17 rw rootwait earlycon console=tty0 console=ttyS2,1500000n8 fw_devlink=off init=/sbin/init
booti ${kernel_addr_r} ${ramdisk_addr_r} ${fdt_addr_r}

Note that we're now loading a ramdisk. You'll also get a terminal on the screen, so you can use a usbc docking-station to use a keyboard, mouse etc.

This post is way too long at this point, so next post will be a shorter one on getting X up and running with the touchscreen, pen and onscreen keyboard.

(Restored: Original date: January 8, 2022)

This is really just an expansion of Dorian's notes but with more of a step-by-step for those who want all the info in one place.

Step 1: adb access

First, enable developer mode utils:

  • Click the applications icon in the top menu (four squares, one rotated)
  • Click “Application Management”
  • If Settings is there, click it, if not, the 3 dot menu top right and Show system, then search for Settings and click it.
  • Click “Open”
  • Click “About tablet”
  • Click “Build number” 8 times (the first few it won't say anything, after 8 it should tell you “You are now a developer”)
  • Click the back arrow top left
  • Click “System” –> Click “Advanced” –> Click Developer options
  • Scroll down a lot until you get to “Default USB configuration”, click it and select “PTP”
  • Click the back arrow top left
  • Scroll back up and find USB debugging, click it and click ok
  • Plug the tablet into your computer, and a dialog will appear asking if you wish to “Allow USB debugging”.
  • You probably don't want this dialog popping up every time, so check the box “Always allow from this computer” first, then click allow.

Congrats, now you can use ADB to connect to the Pinenote. If you don't have adb, I recommend:

  • Windows – the “15 second adb installer”
  • Linux – apt install adb, and you'll probably want fastboot too, so apt install fastboot
  • Mac – brew install android-platform-tools

Step 2: magisk root

You will need a copy of your boot partition to continue. Either follow Dorian's readme and get your own or grab the one from his archive.

Then we use ADB to push the boot.img and the known-good magisk APK to the device :

adb push boot.img /sdcard/boot.img
adb install magisk_c85b2a0.apk

Magisk should appear in the application menu (four squares, one rotated), if it doesn't, you can get to it the same way you did for settings. Open the application.

  • Click Install (to the right, under the cog)
  • Click Select and Patch a File
  • Click “Open” in the window that appears
  • Click “Phone”
  • Click “boot” (it has a ? icon)
  • Click “Let's go –>”
  • Note down the output file location

Back on your computer, pull the patched image :

adb pull /sdcard/Download/magisk_patched-23016_oYeer.img

Note: use the file location output in the previous step, because yours won't be called magisk_patched-23016_oYeer.img.

Now we need to get into either fastboot or rockusb mode on the tablet and flash the boot image. The easy way is to run adb reboot bootloader which puts the tablet into fastboot mode.

The e-ink screen will show the pine logo with no progress bar while this mode is active, and fastboot devices will list a device whose name starts with PNOTE.

So push the patched image (remember to change the name):

fastboot flash boot magisk_patched-23016_oYeer.img

Once this has completed hold power for a while to turn off the device and then power it on again.

Reopen the Magisk app and after a while it should show a status of “installed”. You can check your root access with adb shell and running su which should pop up an allow/deny box.

Congratulations, your pinenote is rooted!

Bonus Content : ssh!

I suggest, for ease of use (and to avoid adb's sucky terminal), installing f-droid via the browser (google f-droid) and using f-droid to install termux. Then you can ssh into the pinenote from a nice terminal.

  • Get f-droid from
  • upon downloading, android will ask you to allow unknown sources, go ahead and allow it in settings.
  • open it and wait (usually quite a while) for apps to appear in the front page. If they don't, tap “Updates” and do a “pull-down” to refresh and then click “latest” again.
  • search (bottom right) for termux. Sigh at the lack of appropriate sorting in the results, then select termux and click install.
  • Again, you'll need to allow f-droid to install “unknown sources”, so do that dance.
  • Open termux and wait until you get a prompt.
  • First, install openssh and set a password, something strongish like IncorrectPonyCapacitorPaperclip perhaps.

NOTE: at time of writing, due to a certificate issue, you may have issues installing packages from termux. If you do, the solution is to run termux-change-repo and select any repo other than

To do this, run the following commands :

pkg install termux-auth openssh
passwd

Then, every boot (annoying I know) you just run sshd in termux and you can ssh to the device on port 8022. The username is somewhat random at install-time (thanks android) and can be retrieved with whoami. To get your ip, you can type ip a show wlan0 as you would under linux.
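Putting those pieces together, a small hypothetical helper on your computer can build the ssh invocation for you. The username u0_a118 and the IP below are made-up examples; substitute the real output of whoami and ip a show wlan0 from termux:

```shell
# Hypothetical helper: build the ssh invocation for the pinenote.
# Termux's sshd listens on 8022; the username and IP are example values only.
pinenote_ssh_cmd() {
  user="$1"   # from `whoami` inside termux, e.g. u0_a118
  host="$2"   # from `ip a show wlan0` inside termux
  printf 'ssh -p8022 %s@%s\n' "$user" "$host"
}

pinenote_ssh_cmd u0_a118 192.168.1.50
# -> ssh -p8022 u0_a118@192.168.1.50
```

The same shape works for scp, just with a capital -P for the port.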

So a complete command to ssh in might look like :

ssh -p8022 u0_all8@

and to scp files

scp -P8022 some_file u0_all8@

(Restored: original date: January 8, 2022)

This blog post is mostly so I can link to it from my other post, dual-booting debian on the pinenote, but might prove useful for other purposes.

All of the commands to get smauel's kernel and build it are available as a Dockerfile here too.

Note: it has been suggested to use the official arm toolchain rather than the gnu one because if you later use the compiler on the device, incompatibilities can happen, I may write an update to this post later, but you have been warned!

There's also a release on my github repo using github actions if you just want to download a zip with the completed files. The usual warnings about downloading random binary files from the internet apply but at least you can look at the ci config there too.

Prerequisites/assumptions :

  • A debian-based linux distro on an amd64 or other powerful machine (kernel compilation is a heavy process)
  • AT LEAST 6Gb of disk space free on the build machine.

First, check you have deb-src lines in /etc/apt/sources.list for the magic “get me almost all the dependencies to build a kernel please, debian”. If you don't have that, you have a lot more deps to install.

If your sources.list doesn't contain deb-src lines, here's a handy script to add them :

sed -n 's/^deb /deb-src /p' /etc/apt/sources.list | sudo tee /etc/apt/sources.list.d/sources.src.list

then run apt update to get them updated.
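If you'd like to see what that sed rewrite does before touching apt's config, here's a harmless sketch on a single sample line (the mirror URL and suite are just examples). The -n plus trailing p prints only lines that actually matched a deb entry, so comments and any existing deb-src lines are skipped:

```shell
# Demonstrate the deb -> deb-src rewrite on one example sources.list line.
line="deb http://deb.debian.org/debian bullseye main"
printf '%s\n' "$line" | sed -n 's/^deb /deb-src /p'
# -> deb-src http://deb.debian.org/debian bullseye main
```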

I like to run the commands in ~/.local/src/ as I'm a bit old-school: kernel sources traditionally live in /usr/src/linux, and to work as non-root you usually relocate /usr/ to ~/.local/. You can build anywhere, just cd there instead, but if you're happy with that, run these commands :

mkdir -p ~/.local/src
cd ~/.local/src

First, get the dependencies for building and don't ask me why build-dep linux-base doesn't include flex, bison or bc!

apt build-dep -y linux-base
apt install -y gcc-aarch64-linux-gnu binutils-aarch64-linux-gnu git flex bison bc

Then fetch the kernel tree for the device you're using, in our case for the pinenote we're getting smauel's fine work :

git clone
cd linux   # or whatever directory the clone created
git checkout rk356x-ebc-dev

Okay, here's another step you'll want to customize for a different device: we build the defconfig into a valid .config for the pinenote:

make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- pinenote_defconfig

then we can go ahead and build the base kernel and modules:

make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- all

Now, so we have an obvious place to get to the files (and to avoid needing root, and saving you from polluting your amd64 device with arm64 files outside of your build tree), we make a pack directory and install the dtbs and modules there :

mkdir pack
make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- INSTALL_MOD_PATH=${PWD}/pack modules_install
make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- INSTALL_PATH=${PWD}/pack dtbs_install

That's it! All the modules you need are in pack/lib/modules/{kernel-version}, so copy those to /lib/modules/{kernel-version} on your pinenote's linux partition. Your kernel image is in arch/arm64/boot/Image (this is the first file you give u-boot) and the dtb (the other file u-boot needs) is in pack/dtbs/{kernel-version}/rockchip/rk3566-pinenote.dtb. {kernel-version} at the time of writing this document is 5.16.0-rc8-00002-g6c7fc21536f3, but just look in pack/lib/modules/ after you've run the install to be sure what your version is.
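As a worked example, a small hypothetical helper can print where each of the three artifacts lives for a given kernel version (the version string here is the one from this build; check pack/lib/modules/ for yours):

```shell
# Hypothetical helper: list the three files u-boot and the rootfs need,
# given the kernel version reported under pack/lib/modules/.
pinenote_artifacts() {
  kver="$1"
  echo "kernel : arch/arm64/boot/Image"
  echo "dtb    : pack/dtbs/$kver/rockchip/rk3566-pinenote.dtb"
  echo "modules: pack/lib/modules/$kver -> /lib/modules/$kver on the pinenote"
}

pinenote_artifacts 5.16.0-rc8-00002-g6c7fc21536f3
```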

(Restored: original date: September 12, 2021)

Note: The clickbait title should make it obvious but this article is very much an opinion piece. Comments are enabled via fediverse (eg. mastodon) federation – respond to the post at and tag in

Context: I am currently leading the SRE department of a scale-up phase company. 20 years ago I was a Multimedia Developer who built a server so we could share files and the ISDN line – I've been doing this a LONG time. I keep getting asked if we should use Spotify's Backstage IDP or Humanitec or similar. Here's a long-form version of why I say “no”.

Final Note: “Azure DevOps” is an evil name, polluting the already confusing nature of DevOps with their marketing to try and maintain relevance.

Point 1 – DevOps isn't just about automating

A common misunderstanding on “what is devops?” is “well it's just automating stuff”. Sorry to burst that bubble but Sysadmins were writing perl many many years ago and saying no to devs. That did not make them DevOps, nor does automating your CI to release to the VM you point-and-clicked to create.

Point 2 – DevOps is not just tooling

Again, M$ can sod off, but using Puppet, Chef, Ansible, even Terraform and Kubernetes (my top two go-to tools) does not make what you're doing DevOps. If your dev team asks your SRE team to create a queue and your SRE team creates the queue using terraform, guess what, your SRE team is a SysAdmin (Ops) team.

A lot of people are forgetting why we as an industry made DevOps one thing – to collaborate. And let's follow with more whys (3 whys? 5 whys?) – Why do we want to collaborate? To make things better, more stable (ops) and faster (dev). To reduce friction. For devs to understand how our application is running in production so they can debug it.

Point 3 – Developers who understand how their application runs in production are better developers

This is surprisingly not really written down much on the internet but is a core part of the DevOps philosophy that is being eroded by IDPs. A cookie-cutter approach to development and deployment is the first part. I'm all for standardisation and speed (remember that from the point about tooling), but not at the expense of understanding. If you're deploying in Kubernetes for example, and a new developer can write a new service* and get production traffic to it without ever knowing what a Kubernetes deployment or service is, you're doing it wrong! How can that developer ever support their application in production? Sure, they might have DataDog and alerts set up for them, but what happens when an alert goes off? They need their SRE to come and help them, possibly at 3am. So why have devs on call, why not just have a sysadmin team again?

Point 4 – you might already have an IDP!

Have your SRE teams made a lot of nice CI for terraforming resources (queues, databases, etc.) and do you already have a CI for “infrastructure components” inside Kubernetes (an ingress controller, cert-manager, monitoring stack etc.)? Maybe you already have helm charts for your “micro”services and they have a good CI with testing environments and automated testing before promotion to production? Maybe even a nice rollback mechanism? Well done, that sounds a lot like an IDP!

Does that mean you should go further and make the developers not even have to see the helm charts? Or replace that entire system with an off-the-shelf IDP? IMO: no. That is how you make your developers NOT understand how their application runs in production. Code monkey like Fritos?**

Point 5 – but Martyn, the IDP provides a nice centralised place for self-documenting APIs

Sure, that’s something that is nice. It’s also perfectly possible to use OpenAPI (formerly Swagger) and have that kind of documentation built and published in a central place without ripping out your entire infrastructure to do so! Add tags to your monitoring and you could even hotlink from a logline to your docs! Magic IDP from Wizards inc. isn’t going to replace ALL your documentation anyway, so you’re going to have documentation outside said IDP.

Point 6 – an IDP reduces developer onboarding, they can start coding straight away!

See my point 4 about running in production, but I’d actually refute this anyway. A new developer can either learn an abstraction layer that is industry standard (Kubernetes is this, don’t fight me) or an abstraction layer that is specific to their company. What are the chances that any new developer knows the IDP that your company picks vs them knowing a bit about the industry standard system?

I will concede that moving a developer from one team to another has less team-specific onboarding if they’re using an IDP because teams don’t organise themselves the same way unless you force them to. Perhaps a company-wide team documentation structure can help here, without replacing your entire infrastructure?

Point 7 – we're scaling, maybe we should have an IDP?

There's a big decision to be made here – do you want developers who understand how their application runs in production? “You build it, you run it” is a mantra for a good reason. If the environment where you want to scale your company is so strapped for good developers (that actually care how their application runs and performs in production) and you just want “any developers, please”, perhaps an off-the-shelf IDP is the way to go for your company. Of course, probably you want to hire a sysadmin team too because those developers cannot own their application in production if you want any kind of uptime guarantee.

Hopefully you can see that “this doesn't scale, we need a real IDP” only works if you scale to have the knowledge and responsibility in a different place, and if you do that, beware, here be dragons. You might find that the people who really believe in the devops ethos you have been claiming is pervasive in the organisation decide they are no longer needed and go work somewhere it is treasured.

Coming soon

If you enjoyed this rant, look out for my next one : “Workflow engines promote bad practice”

*“Service” is a hugely overloaded term, here I'm talking an application that services users

**Quote from a Jonathan Coulton song called Code Monkey – I am not intending to deprecate people who code.

Karaokards twitch sings bot GA

(Restored: Original date: March 7, 2020)

So I've been messing with a lot of things and now I can finally make the little chatbot public!

So what does it do?

It gives you prompts on what to sing, for when you can't decide or when you can't think.

Example interaction :

@iMartynOnTwitch: !card
@karaokards: Your prompt is : Chorus contains up, down or over

Yep, that's it, that's all it does. It has an admin panel with twitch login where you can get the bot to join or leave your channel, add extra prompts you come up with, and change its magic command.

It's open source of course, hosted at my house, and it has CI that pushes to dockerhub whenever I create a new tag (I quite like this bit tbh).

The quality of the code is sub-par in my perfectionist eyes but it works.

If you want it in your channel, go to and use your twitch login to add it to your channel.

Oh, the name, yeah, there was this game with physical cards and I couldn't get in touch with the author (I tried) so it was his kinda idea and the bot is an homage to his work.