Christmas Movies

I watch a lot of films. Since “completing” the IMDB Top 250 back in 2016 I’ve kept an eye on it, and while I don’t go out of my way to watch the films that newly appear in it I generally sit at over 240 watched. I should note I don’t consider myself a film buff/critic, however. I watch things for enjoyment, and a lot of the time that’s kicking back and relaxing and disengaging my brain. So I don’t get into writing reviews, just high level lists of things I’ve watched, sometimes with a few comments.

With that in mind, let’s talk about Christmas movies. Yes, I appreciate it’s the end of January, but generally during December we watch things that have some sort of Christmas theme. New ones if we can find them, but also some of what we consider “classics”. This almost always starts with Scrooged after we’ve put up the tree. I don’t always like Bill Murray (I couldn’t watch The Life Aquatic with Steve Zissou and I think Lost in Translation is overrated), but he’s in a bunch of things I really like, and Scrooged is one of those.

I don’t care where you sit on whether Die Hard is a Christmas movie or not, it’s a good movie and therefore regularly gets a December watch. Die Hard 2 also fits into that category of “sequel at least as good as the original”, though Helen doesn’t agree. We watched it anyway, and I finally made the connection between the William Sadler hotel scene and Michael Rooker’s in Mallrats.

It turns out I’m a Richard Curtis fan. Love Actually has not aged well; most times I watch it I find something new to question about it, and I always end up hating Alan Rickman for cheating on Emma Thompson, but I do like watching it. He had a new one, That Christmas, out this year, so we watched it as well.

Another new-to-us film this year was Spirited. I generally like Ryan Reynolds, and Will Ferrell is good as long as he’s not too overboard, so I had high hopes. I enjoyed it, but for some reason not as much as I’d expected, and I doubt it’s getting added to the regular watch list.

Larry doesn’t generally like watching full-length films, but he (and we) enjoyed The Grinch, which I actually hadn’t seen before. He’s not as fussed on The Muppet Christmas Carol, but we watch it every year, generally on Christmas or Boxing Day. Favourite thing I saw on the Fediverse in December was “Do you know there’s a book of The Muppet Christmas Carol, and they don’t mention that there’s muppets in it once?”

There are various other light-hearted Christmas films we regularly watch. This year included The Holiday (I have so many issues with even just the practicalities of a short-notice house swap), and Last Christmas (lots of George Michael music, what’s not to love? Also it was only on this watch through that we realised the lead character is the Mother of Dragons).

We started, but could not finish, Carry-On. I saw it described somewhere as copaganda, and that feels accurate. It does not accurately reflect any of my interactions with TSA at airports, especially during busy periods.

Things we didn’t watch this year, but are regularly in the mix, include Fatman, Violent Night (looking forward to the sequel, hopefully this year), and Lethal Weapon. Klaus is kinda at the other end of the spectrum, but very touching, and we’ve watched it a couple of years now.

Given what we seem to like, any suggestions for other films to add? It’s nice to have enough in the mix that we get some variety every year.

Free Software Activities for 2024

I tailed off on blog posts towards the end of the year; I blame a bunch of travel (personal + business), catching the ‘flu, then December being its usual busy self. Anyway, to try and start off the year a bit better I thought I’d do my annual recap of my Free Software activities.

For previous years see 2019, 2020, 2021, 2022 + 2023.

Conferences

In 2024 I managed to make it to FOSDEM again. It’s a hectic conference, and I know there are legitimate concerns about it being a super spreader event, but it has the advantage of being relatively close and having a lot of different groups of people I want to talk to / see talk at it. I’m already booked to go this year as well.

I spoke at All Systems Go in Berlin about Using TPMs at scale for protecting keys. It was nice to actually be able to talk publicly about some of the work stuff my team and I have been working on. I’d a talk submission in for FOSDEM about our use of attestation and why it’s not necessarily the evil some folk claim, but there were a lot of good talks submitted and I wasn’t selected. Maybe I’ll find somewhere else suitable to do it.

BSides Belfast may or may not count - it’s a security conference, but there’s a lot of overlap with various bits of Free software, so I feel it deserves a mention.

I skipped DebConf for 2024 for a variety of reasons, but I’m expecting to make DebConf25 in Brest, France in July.

Debian

Most of my contributions to Free software continue to happen within Debian.

In 2023 I’d done a bunch of work on retrogaming with Kodi on Debian, so I made an effort to try and keep those bits more up to date, even if I’m not actually regularly using them at present. RetroArch got 1.18.0+dfsg-1 and 1.19.1+dfsg-1 uploads. libretro-core-info got associated 1.18.0-1 and 1.19.0-1 uploads too. I note 1.20.0 has been released recently, so I’ll have to find some time to build the appropriate DFSG tarball and update it.

rcheevos saw 11.2.0-1, 11.5.0-1 + 11.6.0-1 uploaded.

kodi-game-libretro itself had 20.2.7-1 uploaded, then 21.0.7-1. Latest upstream is 22.1.0, but that’s tracking Kodi 22 and we’re still on Kodi 21 so I plan to follow the Omega branch for now. Which I’ve just noticed had a 21.0.8 release this week.

Finally in the games space I uploaded mgba 0.10.3+dfsg-1 and 0.10.3+dfsg-2 for Ryan Tandy, before realising he was already a Debian Maintainer and granting him the appropriate ACL access so he can upload it himself; I’ve had zero concerns about any of his packaging.

The Debian Electronics Packaging Team continues to be home for a bunch of packages I care about. There was nothing big there, for me, in 2024, but a few bits of cleanup here and there.

I seem to have become one of the main uploaders for sdcc - I have some interest in the space, and the sigrok firmware requires it to build, so I at least like to ensure it’s in half decent state. I uploaded 4.4.0+dfsg-1, 4.4.0+dfsg-2, and, just in time to count for 2024, 4.4.0+dfsg-3.

The sdcc 4.4 upload led to some compilation issues for sigrok-firmware-fx2lafw, so I uploaded 0.1.7-2 fixing that, then 0.1.7-3 doing some further cleanups.

OpenOCD had 0.12.0-2 uploaded to disable the libgpiod backend thanks to incompatible changes upstream. There were some in-discussion patches with OpenOCD upstream at the time, but they didn’t seem to be ready yet so I held off on pulling them in. 0.12.0-3 fixed builds with more recent versions of jimtcl. It looks like the next upstream release is about a year away, so Trixie will in all probability ship with 0.12.0 as well.

libjaylink had a new upstream release, so 0.4.0-1 was uploaded. libserialport also had a new upstream release, leading to 0.1.2-1.

I finally cracked and uploaded sg3-utils 1.48-1 into experimental. I’m not the primary maintainer, but 1.46 is nearly 4 years old now and I wanted to get it updated in enough time to shake out any problems before we get to a Trixie freeze.

Outside of team owned packages, libcli had compilation issues with GCC 14, leading to 1.10.7-2. I also added a new package, sedutil 1.20.0-2 back in April; it looks fairly unmaintained upstream (there’s been some recent activity, but it doesn’t seem to be release quality), but there was an outstanding ITP and I’ve some familiarity with the space as we’ve been using it at work as part of investigating TCG OPAL encryption.

I continue to keep an eye on Debian New Members, even though I’m mostly inactive as an application manager - we generally seem to have enough available recently. Mostly my involvement is via Front Desk activities, helping out with queries to the team alias, and contributing to internal discussions.

Finally the 3 month rotation for Debian Keyring continues to operate smoothly. I dealt with 2023.03.24, 2023.06.24, 2023.09.22 + 2023.11.24.

Linux

I’d a single kernel contribution this year, to Clean up TPM space after command failure. That was based on some issues we saw at work. I’ve another fix in progress that I hope to submit in 2025, but it’s for an intermittent failure so confirming the fix is necessary + sufficient is taking a little while.

Personal projects

I didn’t end up doing much in the way of externally published personal project work in 2024.

Despite the release of OpenPGP v6 in RFC 9580 I did not manage to really work on onak. I started on the v6 support, but have not had sufficient time to complete anything worth pushing externally yet.

listadmin3 got some minor updates based on external feedback / MRs. It’s nice to know it’s useful to other folk even in its basic state.

That wraps up 2024. I’ve got no particular goals for this year at present. Ideally I’d get v6 support into onak, and it would be nice to implement some of the wishlist items people have provided for listadmin3, but I’ll settle for making sure all my Debian packages are in reasonable state for Trixie.

The (lack of a) return-to-office conspiracy

During COVID companies suddenly found themselves able to offer remote working where it hadn’t previously been on offer. That’s changed over the past 2 or so years, with most places I’m aware of moving back from a fully remote situation to either some sort of hybrid, or even full time office attendance. For example last week Amazon announced a full return to office, having already pulled remote-hired workers in for 3 days a week.

I’ve seen a lot of folk stating they’ll never work in an office again, and that RTO is insanity. Despite being lucky enough to work fully remotely (for a role I’d been approached about before, but was never prepared to relocate for), I feel the objections from those who are pro-remote often fail to consider the nuances involved. So let’s talk about some of the reasons why companies might want to enforce some sort of RTO.

Real estate value

Let’s clear this one up first. It’s not about real estate value, for most companies. City planners and real estate investors might care, but even if your average company owned their building they’d close it in an instant, all other things being equal. An unoccupied building costs a lot less to maintain. And plenty of companies rent and would save money even if there’s a substantial exit fee.

Occupancy levels

That said, once you have anyone in the building the equation changes. If you’re having to provide power, heating, internet, security/front desk staff etc, you want to make sure you’re getting your money’s worth. There’s no point heating a building that can seat 100 when only 10 people are present. One option is to downsize the building, but that leads to not being able to assign everyone a desk, for example. No one I know likes hot desking. There are also scheduling problems around ensuring there are enough desks for everyone who might turn up on a certain day, and you’ve ruled out the option of company/office wide events.

Coexistence builds relationships

As a remote worker I wish it wasn’t true that most people find it easier to form relationships in person, but it is. Some of this can be worked on with specific “teambuilding” style events, rather than in office working, but I know plenty of folk who hate those as much as they hate the idea of being in the office. I am lucky in that I work with a bunch of folk who are terminally online, so it’s much easier to have those casual conversations even being remote, but I also accept I miss out on some things because I’m just not in the office regularly enough. You might not care about this (“I just need to put my head down and code, not talk to people”), but don’t discount it as a valid reason why companies might want their workers to be in the office. This often matters even more for folk at the start of their career, where having a bunch of experienced folk around to help them learn and figure things out ends up working much better in person (my first job offered to let me go mostly remote when I moved to Norwich, but I said no as I knew I wasn’t ready for it yet).

Coexistence allows for unexpected interactions

People hate the phrase “water cooler chat”, and I get that, but it covers the idea of casual conversations that just won’t happen the same way when people are remote. I experienced this while running Black Cat; every time Simon and I met up in person we had a bunch of useful conversations even though we were on IRC together normally, and had a VoIP setup that meant we regularly talked too. Equally when I was at Nebulon there were conversations I overheard in the office where I was able to correct a misconception or provide extra context. Some of this can be replicated with the right online chat culture, but I’ve found many places end up with folk taking conversations to DMs, or they happen in “private” channels. It happens more naturally in an office environment.

It’s easier for bad managers to manage bad performers

Again, this falls into the category of things that shouldn’t be true, but are. Remote working has increased the ability for people who want to slack off to do so without being easily detected. Ideally what you want is that these folk, if they fail to perform, are then performance managed out of the organisation. That’s hard, though; there are (rightly) a bunch of rights workers have (I’m writing from a UK perspective) around the procedure that needs to be followed. Managers need organisational support in this to make sure they get it right (and folk are given a chance to improve), which is often lacking.

Summary

Look, I get there are strong reasons why offering remote is a great thing from the company perspective, but what I’ve tried to outline here is that a return-to-office mandate can have some compelling reasons behind it too. Some of those might be things that wouldn’t exist in an ideal world, but unfortunately fixing them is a bigger issue than just changing where folk work from. Not acknowledging that just makes any reaction against office work seem ill-informed, to me.

Thoughts on Advent of Code + Rust

Diego wrote about his dislike for Advent of Code and that reminded me I hadn’t written up my experience from 2023. Mostly because, spoiler, I never actually completed it and always intended to do so and then write it up. I think it’s time to accept I’m not going to do that, and write down some thoughts before I forget all of them. These are somewhat vague, given the time that’s elapsed, but I think still relevant. You might also find Roger’s problem write up interesting.

I’ve tried AoC a couple of times before; I think I had a very brief attempt back in 2021, and I got 4 days in for 2022. For Advent of Code 2023 I tried much harder to actually complete the challenges, and got most of the way there. I didn’t allow myself to move on to the next day until fully completing the previous day, and didn’t end up doing the second half of December 24th, or any of December 25th.

Rust

First I want to talk about Rust, which is the language I chose to use for the problems. I’ve dabbled a little in it, but I’d like more familiarity with the basic language, and some programming problems seemed like a good way to get that. It’s a language I want to like; I’ve spent a lot of my career writing C, do more in Go these days, and generally think Rust promises a low level, run-time light environment like C but with the rough edges taken off.

I set myself the challenge of using just bare Rust; no external crates, no use of cargo. I was accused of playing on hard mode by doing this, but it really wasn’t the intention - I figured that I should be able to do what I needed without recourse to anything outside the core language, and didn’t want what seemed like the extra complexity of dealing with cargo.

That caused problems, however. I’m used to by-default generic error handling in Go through the error type, but Rust seems to have much more tightly typed errors. I was pointed at anyhow as the right way to do this in Rust. I still find this surprising; I ended up using unwrap() a lot where, with more generic error handling, I could have used ?.
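To make that concrete, here’s a contrived sketch (not lifted from my actual solutions, and input.txt is just a made-up filename): boxing errors as Box<dyn Error> gives something close to Go’s “any error” behaviour, and then ? can propagate both I/O and parse failures without a pile of unwrap() calls.

use std::error::Error;
use std::fs;

// Read a file of one number per line and sum them, propagating any I/O
// or parse failure to the caller via ? rather than unwrap()ing each step.
fn sum_file(path: &str) -> Result<u64, Box<dyn Error>> {
    let contents = fs::read_to_string(path)?;
    let mut total = 0;
    for line in contents.lines() {
        total += line.trim().parse::<u64>()?;
    }
    Ok(total)
}

fn main() {
    match sum_file("input.txt") {
        Ok(total) => println!("{}", total),
        Err(e) => eprintln!("failed: {}", e),
    }
}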

The other thing I discovered is that by default rustc produces unoptimised debug builds. I got significantly better results on some of the solutions with rustc -O -C target-cpu=native source.rs. I probably shouldn’t be surprised by this, but it’s worth noting.

Rust, to me, has a syntax only a C++ programmer could love. I am not a C++ programmer. Coming from C I found Go to be a nice, simple syntax to learn. Rust has not been the same. There’s a lot more punctuation, and it’s not always clear to me what it’s doing. This applies more when reading other people’s code than when writing it myself, obviously, but I see a lot of Rust code that could give Perl a run for its money in terms of looking like line noise.

The borrow checker didn’t bug me too much, but did add overhead to my thinking. The Rust compiler is generally very good at outputting helpful error messages when the programmer is an idiot. I ended up having to use a RefCell for one solution, and using .iter() for loops rather than explicit iterators (why, why is this different?). I also kept forgetting to explicitly mark variables as mutable when declaring them.
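For anyone wondering what I mean, a contrived sketch (nothing to do with the actual puzzles): iterating a Vec directly moves it, so borrowing with .iter() keeps it usable afterwards, and RefCell moves the borrow check to runtime so you can mutate something you only hold a shared reference to.

use std::cell::RefCell;

fn main() {
    let values = vec![1, 2, 3];

    // `for v in values` would move the Vec into the loop; borrowing with
    // .iter() (or `&values`) leaves it available afterwards.
    let mut total = 0;
    for v in values.iter() {
        total += v;
    }
    println!("total = {}, still have {} values", total, values.len());

    // RefCell defers the borrow check to runtime, allowing mutation
    // through what is otherwise a shared reference.
    let log = RefCell::new(Vec::new());
    let log_ref = &log;
    log_ref.borrow_mut().push(total);
    println!("log = {:?}", log.borrow());
}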

Things I liked? There’s a rich set of first class data types. Look, I’m a C programmer, I’m easily pleased. You give me some sort of hash array and I’ll be happy. Rust manages that, tuples, strings, all the standard bits any modern language can provide. The whole impl thing for adding methods to structures I like as a way of providing some abstraction, though I think Go has a nicer syntax for it. The compiler, as mentioned, is great at spitting out useful errors for the most part. Also although I wasn’t using external crates for AoC I do appreciate there’s a decent ecosystem there now (though that brings up another gripe: Rust seems to still be a fairly fast-moving target, to the extent I can no longer rely on the compiler in Debian stable to be able to compile random projects I find).
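As a sketch of the impl pattern I mean (Claim here is just a made-up example, not from any puzzle), the methods live in a separate impl block rather than inside the struct definition:

// A made-up struct purely to show the shape of an impl block.
struct Claim {
    id: u32,
    points: u32,
}

impl Claim {
    fn new(id: u32) -> Self {
        Claim { id, points: 0 }
    }

    fn award(&mut self, points: u32) {
        self.points += points;
    }

    fn describe(&self) -> String {
        format!("claim {} has {} points", self.id, self.points)
    }
}

fn main() {
    let mut c = Claim::new(7);
    c.award(3);
    println!("{}", c.describe());
}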

Advent of Code

Let’s talk about the Advent of Code bit now. Hopefully it’s long enough since it came out that this won’t be spoilers for anyone, but if you haven’t attempted the 2023 AoC and still might, you may want to stop reading here.

First, a refresher on the format for those who might not be aware of it. Problems are posted daily from December 1st until the 25th. Each is in 2 parts; the second part is not viewable until you have provided the correct answer for the first part. There’s a whole leaderboard thing going on, but the puzzle opens at midnight UTC-5 so generally by the time I wake up and have time to look the problem has been solved many times over; no chance of getting listed.

Credit to AoC creator, Eric Wastl, for writing up the set of problems in an entertaining fashion. I quite enjoyed seeing how the puzzle would be phrased each day, and the whole thing obviously brings a lot of joy to folk I know.

I always start AoC thinking it’ll be a fun set of puzzles to solve. Then something happens and I miss a day or two, and all of a sudden I’ve a bunch of catching up to do and it’s all a bit more of a chore. I hit that at some points this time, but made a concerted effort to try and power through it.

That perseverance was required up front, because I found the second part of Day 1 to be ill-specified, and had to iterate a few times to actually calculate the desired solution (IIRC, issues about whether sevenone at the end of a line ended up as 7 or 1 really tripped me up). I don’t recall any other problems that bit me as hard on the specification as this one, but it happening up front was unfortunate.
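For anyone who hasn’t hit it: the trap is that spelled-out digits can overlap, so sevenone needs to yield both a 7 and a 1. A sketch of one way to cope (not my actual solution) is to check every position in the line for either a digit character or a digit word:

// Words for the digits 1..=9; overlaps are handled naturally because we
// check every starting position rather than consuming matched text.
const WORDS: [&str; 9] = [
    "one", "two", "three", "four", "five", "six", "seven", "eight", "nine",
];

fn digits(line: &str) -> Vec<u32> {
    let mut found = Vec::new();
    for i in 0..line.len() {
        let rest = &line[i..];
        if let Some(d) = rest.chars().next().and_then(|c| c.to_digit(10)) {
            found.push(d);
            continue;
        }
        for (n, word) in WORDS.iter().enumerate() {
            if rest.starts_with(word) {
                found.push(n as u32 + 1);
                break;
            }
        }
    }
    found
}

fn main() {
    let ds = digits("sevenone");
    // First and last digits form the calibration value: 71 here.
    println!("{}", ds[0] * 10 + ds[ds.len() - 1]);
}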

The short example input doesn’t always help with this; either it’s not enough to be able to extrapolate patterns from, or it doesn’t show all the variations you need to account for (that aren’t fully specified in the text), or in a few cases it turned out I needed to understand the shape of the actual data to produce a solution that could actually complete in a reasonable time.

Which brings me to another matter: sometimes brute force doesn’t actually work. This is fine, but the second part of the day’s problem can change the approach you’d take. So sometimes I got lucky in the way I handled the first half, and doing the second half was a simple 5 minute tweak, and sometimes I had to entirely change the way I was storing data.

You might claim that if I was a better programmer I’d have always produced a first half solution that was amenable to extension for the second half. First, I dispute that; I think there are always situations where the problem domain can change in enough directions that you can’t handle all of them without a lot of effort. Secondly, I didn’t find AoC an environment that encouraged me to optimise for generic solutions. Maybe some of the puzzles in isolation would allow for that, but a month of daily problems to solve while still engaging in regular life meant I hacked things up, took short cuts based on the knowledge I had of the input data, etc, etc.

Overall I can see the appeal, but the sheer quantity and the fact I write code as part of my day job just made it feel too much like a chore, rather than a fun mental exercise. I did wonder how they’d look as a set of interview puzzles (obviously a subset, rather than all of them), but I’m not sure how you’d actually use them for that - I wouldn’t want anyone to have to solve them in a live interview.

So, in case it’s not obvious, I’m not planning to engage in AoC again this year. But I’m continuing to persevere with Rust (though most of my work stuff is thankfully still Go).

Using QEmu for UEFI/TPM testing

This is one of those posts that’s more for my own reference than likely to be helpful for others. If you’re unlucky it’ll have some useful tips for you. If I’m lucky then I’ll get a bunch of folk pointing out some optimisations.

First, what I’m trying to achieve. I want a virtual machine environment where I can manually do tests on random kernels, and also various TPM related experiments. So I don’t want something as fixed as a libvirt setup. I’d like the following:

  • It to be fairly lightweight, so I can run it on a laptop while usefully doing other things
  • I need a TPM 2.0 device to appear to the guest OS, but it doesn’t need to be a real TPM
  • Block device discard should work, so I can back it with a qcow2 image and use fstrim to keep the actual on disk size small, without constraining my potential for file system expansion should I need it
  • I’ve no need for graphics, in fact a serial console would be better as it eases copy & paste, especially when I screw up kernel changes

That turns out to be possible, but it took a bunch of trial and error to get there. So I’m writing it down. I generally do this on a Fedora based host system (FC40 at present, but this all worked with FC38 + FC39 too), and I run Debian 12 (bookworm) as the guest. At present I’m using qemu 8.2.2 and swtpm 0.9.0, both from the FC40 packages.

One other issue I spent too long tracking down is that the version of grub 2.06 in bookworm does not manage to pass the TPMEventLog through to the guest kernel properly. The events get measured and the PCRs updated just fine, but /sys/kernel/security/tpm0/binary_bios_measurements doesn’t even get created. Using either grub 2.06 from FC40, or the 2.12 backport in bookworm-backports, makes this work just fine.

Anyway, for reference, the following is the script I use to start the swtpm, and then qemu. The debugcon line can be dropped if you’re not interested in OVMF debug logging. This needs the guest OS to be configured up for a serial console, but avoids the overhead of graphics emulation.

As I said at the start, I’m open to any hints about other options I should be passing; as long as I get acceptable performance in the guest I care more about reducing host load than optimising for the guest.

#!/bin/sh

BASEDIR=/home/noodles/debian-qemu

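# Start the software TPM emulator if its control socket is not already present.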
if [ ! -S ${BASEDIR}/swtpm/swtpm-sock ]; then
    echo Starting swtpm:
    swtpm socket --tpmstate dir=${BASEDIR}/swtpm \
        --tpm2 \
        --ctrl type=unixio,path=${BASEDIR}/swtpm/swtpm-sock &
fi

echo Starting QEMU:
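# q35 machine with OVMF (UEFI) firmware, a virtio disk with discard enabled,
# the swtpm socket exposed as a TPM 2.0 device, and serial console only.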
qemu-system-x86_64 -enable-kvm -m 2048 \
    -machine type=q35 \
    -smbios type=1,serial=N00DL35,uuid=fd225315-f72a-4d66-9b16-55363c6c938b \
    -drive if=pflash,format=qcow2,readonly=on,file=/usr/share/edk2/ovmf/OVMF_CODE_4M.qcow2 \
    -drive if=pflash,format=raw,file=${BASEDIR}/OVMF_VARS.fd \
    -global isa-debugcon.iobase=0x402 -debugcon file:${BASEDIR}/debian.ovmf.log \
    -device virtio-blk-pci,drive=drive0,id=virblk0 \
    -drive file=${BASEDIR}/debian-12-efi.qcow2,if=none,id=drive0,discard=on \
    -net nic,model=virtio -net user \
    -chardev socket,id=chrtpm,path=${BASEDIR}/swtpm/swtpm-sock \
    -tpmdev emulator,id=tpm0,chardev=chrtpm \
    -device tpm-tis,tpmdev=tpm0 \
    -display none \
    -nographic \
    -boot menu=on
