Martyn's random musings

(Restored: Original date: January 8, 2022)

This is really just an expansion of Dorian's notes but with more of a step-by-step for those who want all the info in one place.

Step 1: adb access

First, enable developer mode utils:

  • Click the applications icon in the top menu (four squares, one rotated)
  • Click “Application Management”
  • If you see a list of apps, continue; if not, click “Application Message”
  • If Settings is there, click it; if not, open the 3-dot menu in the top right, tap “Show system”, then search for Settings and click it.
  • Click “Open”
  • Click “About tablet”
  • Click “Build number” 8 times (the first few taps won't say anything; after 8 it should tell you “You are now a developer”)
  • Click the back arrow top left
  • Click “System” –> Click “Advanced” –> Click Developer options
  • Scroll down a lot until you get to “Default USB configuration”, click it and select “PTP”
  • Click the back arrow top left
  • Scroll back up and find USB debugging, click it and click ok
  • Plug the tablet into your computer, and a dialog will appear asking if you wish to “Allow USB debugging”.
  • You probably don't want this popping up every time, so check the box “Always allow from this computer”, then click allow.

Congrats, now you can use ADB to connect to the Pinenote. If you don't have it, I recommend :

  • Windows – the 15 second adb installer
  • Linux – apt install adb, and you'll probably want fastboot too, so apt install fastboot
  • Mac – brew install android-platform-tools
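Once adb is installed and the tablet is plugged in, a quick sanity check is to list connected devices; you should see one entry marked “device” (if it says “unauthorized”, accept the debugging dialog on the tablet) :

adb devices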

Step 2: magisk root

You will need a copy of your boot partition to continue. Either follow Dorian's readme and get your own or grab the one from his archive.

Then we use ADB to push the boot.img and the known-good magisk APK to the device :

adb push boot.img /sdcard/boot.img
adb install magisk_c85b2a0.apk

Magisk should appear in the application menu (four squares, one rotated); if it doesn't, you can get to it the same way you did for Settings. Open the application.

  • Click Install (to the right, under the cog)
  • Click Select and Patch a File
  • Click “Open” in the window that appears
  • Click “Phone”
  • Click “boot” (it has a ? icon)
  • Click “Let's go –>”
  • Note down the output file location

Back on your computer, pull the patched image :

adb pull /sdcard/Download/magisk_patched-23016_oYeer.img

Note: use the file location output in the previous step, because yours won't be called magisk_patched-23016_oYeer.img.

Now we need to get into either fastboot or rockusb mode on the tablet and flash the boot image. The easy way is to run adb reboot bootloader which puts the tablet into fastboot mode.

When this mode is active, the e-ink screen will show the pine logo with no progress bar, and fastboot devices will show a device whose name starts with PNOTE.
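Putting that together, from your computer :

adb reboot bootloader
fastboot devices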

So flash the patched image (remember to change the name):

fastboot flash boot magisk_patched-23016_oYeer.img

Once this has completed hold power for a while to turn off the device and then power it on again.

Reopen the Magisk app and after a while it should show a status of “installed”. You can check your root access by running adb shell and then su, which should pop up an allow/deny box on the tablet.
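If you want to double-check from your computer, it looks roughly like this (the id line is just an easy way to confirm you really are root; tap allow on the tablet when the Magisk prompt appears) :

adb shell
su
id    # should report uid=0(root)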

Congratulations, your pinenote is rooted!

Bonus Content : ssh!

For ease of use (and to avoid adb's sucky terminal), I suggest installing f-droid via the browser (google f-droid) and using f-droid to install termux. Then, you can ssh into the pinenote from a nice terminal.

  • Get f-droid from https://f-droid.org
  • Upon downloading, Android will ask you to allow unknown sources; go ahead and allow it in settings.
  • Open it and wait (usually quite a while) for apps to appear on the front page. If they don't, tap “Updates”, do a pull-down to refresh, then click “Latest” again.
  • Search (bottom right) for termux. Sigh at the lack of appropriate sorting in the results, then select Termux and click Install.
  • Again, you'll need to allow f-droid to install apps from “unknown sources”, so do that dance.
  • Open termux and wait until you get a prompt.
  • First, install openssh and set a password – something strongish like IncorrectPonyCapacitorPaperclip, perhaps.

NOTE: at time of writing, due to a certificate issue, you may have issues installing packages from termux. If you do, the solution is to run termux-change-repo and select any repo other than grimler.se.

To do this, run the following commands :

pkg install termux-auth openssh
passwd

Then, every boot (annoying, I know) you just run sshd in termux and you can ssh to the device on port 8022. The username is somewhat random at install time (thanks, android) and can be retrieved with whoami. To get your IP, you can type ip a show wlan0 as you would under linux.
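So, on the tablet in termux, the per-boot routine looks something like this :

sshd                 # start the ssh server, it listens on port 8022
whoami               # your install-time username, used in the ssh commands below
ip a show wlan0      # the inet line is your ip address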

So a complete command to ssh in might look like :

ssh -p8022 u0_all8@192.168.1.102

and to scp files

scp -P8022 some_file u0_all8@192.168.1.102:

(Restored: original date: January 8, 2022)

This blog post is mostly so I can link to it from my other post, dual-booting debian on the pinenote, but might prove useful for other purposes.

All of the commands to get smaeul's kernel and build it are available as a Dockerfile here too.

Note: it has been suggested to use the official Arm toolchain rather than the GNU one, because if you later use the compiler on the device, incompatibilities can happen. I may write an update to this post later, but you have been warned!

There's also a release on my github repo using github actions if you just want to download a zip with the completed files. The usual warnings about downloading random binary files from the internet apply but at least you can look at the ci config there too.

Prerequisites/assumptions :

  • A debian-based linux distro on an amd64 or other powerful machine (kernel compilation is a heavy process)
  • AT LEAST 6GB of disk space free on the build machine.

First, check you have deb-src lines in /etc/apt/sources.list for the magic “get me almost all the dependencies to build a kernel please, debian”. If you don't have that, you have a lot more deps to install.

If your sources.list doesn't contain deb-src lines, here's a handy script to add them :

sed 's/^deb /deb-src /' /etc/apt/sources.list | sudo tee /etc/apt/sources.list.d/sources.src.list

Then run sudo apt update to pick up the new sources.

I like to run the commands in ~/.local/src/ as I'm a bit old-school; kernel sources usually live in /usr/src/linux, but to work as non-root you relocate /usr/ to ~/.local/. You can build anywhere, just cd there instead, but if you're happy with my choice, run these commands :

mkdir -p ~/.local/src
cd ~/.local/src

First, get the dependencies for building (and don't ask me why build-dep linux-base doesn't include flex, bison or bc!):

sudo apt build-dep -y linux-base
sudo apt install -y gcc-aarch64-linux-gnu binutils-aarch64-linux-gnu git flex bison bc

Then fetch the kernel tree for the device you're using; in our case, for the pinenote, we're getting smaeul's fine work :

git clone https://github.com/smaeul/linux.git
cd linux
git checkout rk356x-ebc-dev

Okay, so here's another step you'll want to customize for another device; we build the defconfig into a valid .config for the pinenote:

make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- pinenote_defconfig

then we can go ahead and build the base kernel and modules:

make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- all
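This takes a while; if you have cores to spare, the usual make trick of passing -j parallelises it (same command, just with -j added) :

make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- -j$(nproc) all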

Now, so we have an obvious place to get to the files (and to avoid needing root, and saving you from polluting your amd64 device with arm64 files outside of your build tree), we make a pack directory and install the dtbs and modules there :

mkdir pack
make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- INSTALL_MOD_PATH=${PWD}/pack modules_install
make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- INSTALL_PATH=${PWD}/pack dtbs_install

That's it! All the modules you need are in pack/lib/modules/{kernel-version}, so you'll want to copy those to /lib/modules/{kernel-version} on your pinenote's linux partition. Your kernel image is in arch/arm64/boot/Image (this is the first file you give u-boot) and the dtb (the other file u-boot needs) is in pack/dtbs/{kernel-version}/rockchip/rk3566-pinenote.dtb. {kernel-version} at the time of writing this document is 5.16.0-rc8-00002-g6c7fc21536f3, but just look in pack/lib/modules/ after the build to be sure what your version is.
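As a rough sketch of getting those files onto the tablet (root@pinenote and the /boot paths here are placeholders – adjust them for however you've mounted your pinenote's linux partition and set up u-boot) :

# NB: root@pinenote and the /boot paths are placeholders, not gospel
KVER=$(ls pack/lib/modules/)    # the kernel version you just built
scp -r pack/lib/modules/${KVER} root@pinenote:/lib/modules/
scp arch/arm64/boot/Image root@pinenote:/boot/
scp pack/dtbs/${KVER}/rockchip/rk3566-pinenote.dtb root@pinenote:/boot/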

(Restored: original date: September 12, 2021)

Note: The clickbait title should make it obvious but this article is very much an opinion piece. Comments are enabled via fediverse (e.g. mastodon) federation – respond to the post at @martyn@musings.martyn.berlin and tag in @martyn@toot.martyn.berlin.

Context: I am currently leading the SRE department of a scale-up phase company. 20 years ago I was a Multimedia Developer who built a server so we could share files and the ISDN line – I've been doing this a LONG time. I keep getting asked if we should use Spotify's Backstage IDP or Humanitec or similar. Here's a long-form version of why I say “no”.

Final Note: “Azure DevOps” is an evil name, polluting the already confusing nature of DevOps with their marketing to try and maintain relevance.

Point 1 – DevOps isn't just about automating

A common misunderstanding on “what is devops?” is “well it's just automating stuff”. Sorry to burst that bubble but Sysadmins were writing perl many many years ago and saying no to devs. That did not make them DevOps, nor does automating your CI to release to the VM you point-and-clicked to create.

Point 2 – DevOps is not just tooling

Again, M$ can sod off, but using Puppet, Chef, Ansible, even Terraform and Kubernetes (my top two go-to tools) does not make what you're doing DevOps. If your dev team asks your SRE team to create a queue and your SRE team creates the queue using terraform, guess what, your SRE team is a SysAdmin (Ops) team.

A lot of people are forgetting why we as an industry made DevOps one thing – to collaborate. And let's follow with more whys (3 whys? 5 whys?) – Why do we want to collaborate? To make things better, more stable (ops) and faster (dev). To reduce friction. For devs to understand how our application is running in production so they can debug it.

Point 3 – Developers who understand how their application runs in production are better developers

This is surprisingly not really written down much on the internet but is a core part of the DevOps philosophy that is being eroded by IDPs. A cookie-cutter approach to development and deployment is the first part. I'm all for standardisation and speed (remember that from the point about tooling), but not at the expense of understanding. If you're deploying in Kubernetes for example, and a new developer can write a new service* and get production traffic to it without ever knowing what a Kubernetes deployment or service is, you're doing it wrong! How can that developer ever support their application in production? Sure, they might have DataDog and alerts set up for them, but what happens when an alert goes off? They need their SRE to come and help them, possibly at 3am. So why have devs on call, why not just have a sysadmin team again?

Point 4 – you might already have an IDP!

Have your SRE teams made a lot of nice CI for terraforming resources (queues, databases, etc.) and do you already have a CI for “infrastructure components” inside Kubernetes (an ingress controller, cert-manager, monitoring stack etc.)? Maybe you already have helm charts for your “micro”services and they have a good CI with testing environments and automated testing before promotion to production? Maybe even a nice rollback mechanism? Well done, that sounds a lot like an IDP!

Does that mean you should go further and make the developers not even have to see the helm charts? Or replace that entire system with an off-the-shelf IDP? IMO: no. That is how you make your developers NOT understand how their application runs in production. Code monkey like Fritos?**

Point 5 – but Martyn, the IDP provides a nice centralised place for self-documenting APIs

Sure, that’s something that is nice. It’s also perfectly possible to use OpenAPI (formerly Swagger) and have that kind of documentation built and published in a central place without ripping out your entire infrastructure to do so! Add tags to your monitoring and you could even hotlink from a logline to your docs! Magic IDP from Wizards inc. isn’t going to replace ALL your documentation anyway, so you’re going to have documentation outside said IDP.

Point 6 – an IDP reduces developer onboarding, they can start coding straight away!

See my point 4 about running in production, but I’d actually refute this anyway. A new developer can either learn an abstraction layer that is industry standard (Kubernetes is this, don’t fight me) or an abstraction layer that is specific to their company. What are the chances that any new developer knows the IDP that your company picks vs them knowing a bit about the industry standard system?

I will concede that moving a developer from one team to another has less team-specific onboarding if they’re using an IDP because teams don’t organise themselves the same way unless you force them to. Perhaps a company-wide team documentation structure can help here, without replacing your entire infrastructure?

Point 7 – we're scaling, maybe we should have an IDP?

There's a big decision to be made here – do you want developers who understand how their application runs in production? “You build it, you run it” is a mantra for a good reason. If the environment where you want to scale your company is so strapped for good developers (that actually care how their application runs and performs in production) and you just want “any developers, please”, perhaps an off-the-shelf IDP is the way to go for your company. Of course, probably you want to hire a sysadmin team too because those developers cannot own their application in production if you want any kind of uptime guarantee.

Hopefully you can see that “this doesn't scale, we need a real IDP” only works if you scale to have the knowledge and responsibility in a different place, and if you do that, beware, here be dragons. You might find the people who really believe in the devops ethos that you have been claiming is pervasive in the organisation decide that they are no longer needed and go work somewhere where it is treasured.

Coming soon

If you enjoyed this rant, look out for my next one : “Workflow engines promote bad practice”

*“Service” is a hugely overloaded term, here I'm talking an application that services users

**Quote from a Jonathan Coulton song called Code Monkey – I am not intending to deprecate people who code.

Karaokards twitch sings bot GA

(Restored: Original date: March 7, 2020)

So I've been messing with a lot of things and now I can finally make the little chatbot public!

So what does it do?

It gives you prompts on what to sing, for when you can't decide or when you can't think.

Example interaction :

@iMartynOnTwitch: !card
@karaokards: Your prompt is : Chorus contains up, down or over

Yep, that's it, that's all it does. It has an admin panel with twitch login where you can get the bot to join or leave your channel, add extra prompts you come up with, and change its magic command.

It's open source of course, hosted at my house (https://git.martyn.berlin/martyn/karaokards), and it has CI that pushes to dockerhub whenever I create a new tag (I quite like this bit tbh).

The quality of the code is sub-par in my perfectionist eyes but it works.

If you want it in your channel, go to https://karaokards.martyn.berlin/ and use your twitch login to add it to your channel.

Oh, the name – yeah, there was this game with physical cards and I couldn't get in touch with the author (I tried), so it was kinda his idea and the bot is an homage to his work.

Some infrastructure work

(Restored: Original date: July 27, 2019)

So, as the last post implied, I needed backups. Right now, ceph is fighting me regularly, so I may need to recover my home cluster.

Well, I have a VPS from schnellno.de and there I also have k3s running (it was kubeadm, but k3s is so neat and great for what I use it for that I've kinda standardised on it now).

Now I can use ceph in single-replica mode on that node, and create a CephObjectStore (not something I've played with before, but cool) to provide s3 compatible storage that isn't aws.

The benefits this brings are multiple :

  • I can use any backup system that targets s3 (e.g. velero)
  • I can use cyberduck as an s3 client on my windows laptop to explore the “buckets”
  • It's fronted by nginx-ingress with cert-manager, so the transfer is encrypted
  • It has authentication as a standard so it should be secure

So that's what I did: I installed velero, and now I have backups of my cluster and can recreate it quickly. Unfortunately, paying for online storage for the ~8TiB of spinny disks I have is not sensible, so I need to be a bit more clever when it comes to backing up the data.
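For reference, pointing velero at s3-compatible storage looks roughly like this – a sketch based on velero's generic s3/minio instructions rather than my exact setup, so the plugin version, bucket, credentials file and s3Url are placeholders for your own object store :

# placeholders: plugin version, bucket, credentials file and s3Url
velero install \
  --provider aws \
  --plugins velero/velero-plugin-for-aws:v1.2.1 \
  --bucket backups \
  --secret-file ./credentials-velero \
  --use-volume-snapshots=false \
  --backup-location-config region=default,s3ForcePathStyle="true",s3Url=https://my-objectstore.example.com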

I found https://github.com/ianneub/mysqldump-to-s3 on dockerhub and it's alright but it doesn't actually allow you to target s3-compatible storage, only s3. It's also based on ubuntu so it's a heavy container. And it doesn't have automated builds. So I forked it.

https://github.com/iMartyn/mysqldump-to-s3 and https://hub.docker.com/r/imartyn/mysqldumptos3 exist and are approximately half the size of the previous incarnation (alpine based).

I now have daily backups of this blog's database, my nextcloud database and the kubernetes objects. That's a very good start and I'm feeling happy about this.

Yet another attempt at using a blog

(Restored: Original date: July 21, 2019)

So I tried twice to set this up and twice my hardware killed the database before I'd set up backups.

Hopefully I'll get there this time!