I wrote a month ago about getting native IPv6 over fibre. Given that this week RIPE announced they’d run out of IPv4 allocations and today my old connection was turned off, it seems an appropriate point to look at my v4 vs v6 usage over a month (as a reminder, I used my new FTTP connection for v6 only over the past month, with v4 still going over my old FTTC connection). I’m actually surprised at the outcome:
That’s telling me I received 1.8 times as much traffic via IPv6 over the past month as I did over IPv4. Even discounting my backups (the 2 v6 peaks), which could account for up to half of that, that means IPv6 and IPv4 are about equal. That’s with all internal networks doing both and no attempt at traffic shaping between them - everything’s free to pick their preference.
I don’t have a breakdown of what went where, but if you run a network and you’re not v6 enabled, why not? From my usage at least you’re getting towards being in the minority.
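If you want a rough idea of your own v4/v6 split without setting up proper per-link accounting, the Linux kernel keeps cumulative per-protocol byte counters in `/proc`. This is a crude, host-wide proxy for the graphs I'm using (it counts everything since boot, not just WAN traffic), but it's a one-liner sanity check:

```shell
# Cumulative inbound octets for v4 and v6 from the kernel's global stats.
# IpExt/InOctets lives in /proc/net/netstat; Ip6InOctets in /proc/net/snmp6.
v4=$(awk '/^IpExt:/ { if (!hdr) { for (i = 1; i <= NF; i++) if ($i == "InOctets") col = i; hdr = 1 } else print $col }' /proc/net/netstat 2>/dev/null)
v6=$(awk '$1 == "Ip6InOctets" { print $2 }' /proc/net/snmp6 2>/dev/null)
echo "IPv4 in: ${v4:-unknown} octets"
echo "IPv6 in: ${v6:-unknown} octets"
```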
Last week I changed ISP. My primary reason was to get native IPv6 at home. As a side effect I’ve lowered my monthly costs and moved from VDSL2 (Fibre To The Cabinet/FTTC) to GPON (Fibre To The Premises/FTTP). But trust me when I say the thing that prompted the move was the desire for native v6.
First, some words of thanks to my previous ISP. I was with MCL Services who have been absolutely fantastic; no issues with service, and responsive support when I had queries. The problem was that they’re a Gamma reseller, and Gamma are showing no signs of enabling v6 (I had Daniel poke them several times, because even a rough ETA would have kept me hanging around to see if they made good on it).
What caused me to even start looking elsewhere was BT mailshotting me about the fact I’m in a Fibre First area and FTTP was thus now available to me. They dangled some pretty attractive pricing in front of me (£50/month for 300M/50M). BT have enabled v6 across their consumer network (and should be applauded for that), but unfortunately don’t provide a static v6 range as part of that. One of the things I wanted was to give my internal hosts static IPs. A dynamic range doesn’t allow for that. So BT was a no.
Conveniently enough there’d been a thread on the debian-uk mailing list about server-friendly ISPs. I’m not looking to run services on the end of my broadband line - as long as I can SSH in and provide a basic HTTPS endpoint for some remote services to call in, that’s perfect - but a mention of Aquiss came up as a competent option. I was already aware of them as I know several existing users, and I knew they use Entanet to provide pieces of their service. Enta are long time IPv6 supporters, so I took a look. And discovered that I could move to an equivalent service to what I was on, except over fibre and cheaper (because there was no need to pay for phone line rental I wasn’t using). No brainer.
So last Thursday an engineer from Openreach turned up. Like last time the job was bigger than expected (I think the Openreach database has just failed to record the fact the access isn’t where they think it is). Also like last time they didn’t just go away, but instead arranged for another engineer to turn up to help with the two-man bit of the job, and got it all done that day. The only worrying bit was when my existing line went down - FTTP is a brand new install rather than a migration - but that turned out to be because they run a new hybrid cable from the pole with both fibre and copper on it. Once the new cable was spliced back in the existing connection came back fine. Total outage was just over an hour - something to be aware of if you’re trying to work from home during the install like I was. Thankfully I have enough spare data on my Three contract that I was able to keep working.
A picture of the ONT as installed is above; it’s a new style one with no battery backup and a single phone port + ethernet port. I had it placed beside my existing master socket, because that’s where everything is currently situated, but I was given the option to have it placed elsewhere. There’s a wall-wart for power, so you do need a free socket. The ethernet port provides a GigE connection (even though my line is currently only configured for 80M/20M), and it does PPPoE - no VLANs or anything required, though you do need the username/password from your ISP for CHAP authentication, which looks exactly like a normal ADSL username/password.
I rejigged my OpenWRT setup so I had a spare port on the HomeHub 5A, then configured up a “wan2” interface with the PPPoE login details and IPv6 enabled:
```
config interface 'wan2'
        option ifname 'eth0.100'
        option proto 'pppoe'
        option username 'noodles@fttp'
        option password 'gimmev6fttp'
        option ipv6 '1'
        option ip6prefix '2001:xxxx:yyyy:zz00::/56'
        option defaultroute '0'
```
(I’d put the spare port into VLAN 100, hence the `eth0.100` interface name.) For the moment I’m using the old line for IPv4 (I have a 30 day notice on it) and the new line for just IPv6, hence setting `defaultroute` to 0. I actually end up with more IPv6 traffic than I’d expect (though there’d be more if my TV did v6 for Netflix):
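For completeness, putting a spare port into VLAN 100 on the HomeHub's internal switch looks something like the following. The port numbers here are illustrative for my wiring rather than anything universal; `swconfig dev switch0 show` will list the ports on your hardware:

```
config switch_vlan
        option device 'switch0'
        option vlan '100'
        option ports '4 6t'
```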
I had to do a bunch of internal reconfiguration as well; I’d previously used a Hurricane Electric tunnel, but only enabled it for certain hosts (I couldn’t saturate my connection over the tunnel). Now I have native IPv6 I wanted everything configured up properly, with internal DNS properly sorted so internal traffic tried to use v6 where possible. That means my MQTT broker is doing v6 (though unfortunately not for my ESP8266 devices), and I’m accessing my Home Assistant instance over v6 (this needed `server_host: ::0` in the `http` configuration section to make it listen on v6, which also stops it listening on v4. Not a problem for me as I front it with an SSL proxy that can do both). Equally SSH to all my internal hosts and containers is now over v6.
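The Home Assistant change is a two-line addition to `configuration.yaml` (as mentioned, with this set it stops listening on v4, so you need something in front of it if you still want v4 access):

```yaml
# configuration.yaml - make the HTTP server listen on IPv6
http:
  server_host: ::0
```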
Of course, ultimately there’s no real external visible indication of the fact things are using IPv6, even for external bits. Which is exactly as it should be.
Yesterday was mostly a day of doing upgrades (as well as the usual Saturday things like a parkrun). First on the list was finally upgrading my last major machine from Stretch to Buster. I’d done everything else and wasn’t expecting any major issues, but this machine runs more things, so there’s generally more opportunity for problems. Some notes I took along the way:
- `apt upgrade` updated collectd, but due to the loose dependencies (the collectd package has a lot of plugins and chooses not to depend on anything other than what the core needs) `libsensors5` was not pulled in, so the daemon restart failed. This made apt/dpkg unhappy until I manually pulled in `libsensors5`.
- My custom fail2ban rule to drop spammers trying to register for wiki accounts too frequently needed the addition of a `datepattern` entry to work properly.
- The new version of `python-flaskext.wtf` caused deprecation warnings from a custom site I run, fixed by moving from `Form` to `FlaskForm`. I still have a `TypeError: __init__() takes at most 2 arguments (3 given)` error to track down.
- ejabberd is still a pain. This time the change of the erlang node name caused issues with it starting up. There was a note in `NEWS.Debian` but the failure still wasn’t obvious at first.
- For most of the config files that didn’t update automatically I just did a manual `vimdiff` and pulled in the updated comments; my changes were things I wanted to keep, like custom SSL certificate configuration or similar.
- PostgreSQL wasn’t as smooth as last time. A `pg_upgradecluster 9.6 main` mostly did the right thing (other than taking ages to migrate the SpamAssassin Bayes database), but left 9.6 still running rather than 11.
- I’m hitting #924178 in my duplicity backups - they’re still working ok, but it might be time to finally try restic.
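As an example of the fail2ban fix above, the `datepattern` entry just sits alongside the filter definition. The filter name, regex, and pattern below are all illustrative placeholders - the important part is that the pattern has to match the timestamp format your wiki's log actually uses (note fail2ban's `%%` escaping in config files):

```
# filter.d/wiki-register.conf (name and regex are illustrative)
[Definition]
failregex = ^<HOST> .* POST /wiki/register
datepattern = %%Y-%%m-%%d %%H:%%M:%%S
```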
All in all it went smoothly enough; the sheer number of packages and the PostgreSQL migration took up most of the time. Perhaps it’s time to look at something with SSDs rather than spinning rust (definitely something to check out when I’m looking for alternative hosts).
The other major-ish upgrade was taking my house router (a repurposed BT HomeHub 5A) from OpenWRT 18.06.2 to 18.06.4. Not as big as the Debian upgrade, but with the potential to leave me with non-working broadband if it went wrong. The CLI sysupgrade approach worked fine (as it has in the past), helped by the fact I’ve added my MQTT and SSL configuration files to `/etc/sysupgrade.conf` so they get backed up and restored. OpenWRT does a full reflash for an upgrade given the tight flash constraints, so other than the config files that get backed up you need to restore anything extra yourself. This includes any non-default packages that were installed, so I end up with something like:
```
opkg update
opkg install mosquitto-ssl 6in4 ip-tiny picocom kmod-usb-serial-pl2303
```
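The `/etc/sysupgrade.conf` additions themselves are just extra paths to preserve; anything listed gets tarred up before the reflash and restored afterwards. Mine are something like the following (the exact paths are illustrative - they depend on where you keep your MQTT and SSL bits):

```
## /etc/sysupgrade.conf - extra files to keep across a reflash
/etc/mosquitto/
/etc/ssl/local/
```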
And then I have a custom compile of the `collectd-mod-network` package to enable encryption, and my mqtt-arp tool to watch for house occupants:
```
opkg install collectd collectd-mod-cpu collectd-mod-interface \
    collectd-mod-load collectd-mod-memory collectd-mod-sensors \
    /tmp/collectd-mod-network_5.8.1-1_mips_24kc.ipk
opkg install /tmp/mqtt-arp_1.0-1_mips_24kc.ipk
```
One thing that got me was the fact that installing the 6in4 package didn’t give me working v6; I had to restart the router for it to even configure up its v6 interfaces. Not a problem, I just didn’t notice for a few hours.
While I was on a roll I upgraded the kernel on my house server to the latest stable release, and Home Assistant to 0.100.2. As expected neither had any hiccups.
It’s Ada Lovelace day and I’ve been lax in previous years about celebrating some of the talented women in technology I know or follow on the interwebs. So, to make up for it, here are 5 amazing technologists.
I was initially aware of Allison through her work on Perl, was vaguely aware of the fact she was working on Ubuntu, briefly overlapped with her at HPE (and thought it was impressive HP were hiring such high calibre Free Software folk) when she was working on OpenStack, and have had the pleasure of meeting her in person due to the fact we both work on Debian. In the continuing theme of being able to do all things tech she’s currently studying for a PhD at Cambridge (the real one), and has already written a fascinating paper about the security misconceptions around virtual machines and containers. She’s also been doing things with home automation, properly, with local speech recognition rather than relying on any external assistant service (I will, eventually, find the time to follow her advice and try this out for myself).
Graphics are one of the many things I just can’t do. I’m not artistic and I’m in awe of anyone who is capable of wrangling bits to make computers do graphical magic. People who can reverse engineer graphics hardware that would otherwise only be supported by icky binary blobs impress me even more. Alyssa is such a person, working on the Panfrost driver for ARM’s Mali Midgard + Bifrost GPUs. The lack of a Free driver stack for this hardware is a real problem for the ARM ecosystem and she has been tirelessly working to bring this to many ARM based platforms. I was delighted when I saw one of my favourite Free Software consultancies, Collabora, had given her an internship over the summer. (Selfishly I’m hoping it means the Gemini PDA will eventually be able to run an upstream kernel with accelerated graphics.)
The first time I saw Angie talk it was about the user experience of Virtual Reality, and how it had an entirely different set of guidelines to conventional user interfaces. In particular the premise of not trying to shock or surprise the user while they’re in what can be a very immersive environment. Obvious once someone explains it to you! Turns out she was also involved in the early days of custom PC builds and internet cafes in Northern Ireland, and has interesting stories to tell. These days she’s concentrating on cyber security - I’ve her to thank for convincing me to persevere with Ghidra - having looked at Bluetooth security as part of her Masters. She’s also deeply aware of the implications of the GDPR and has done some interesting work on thinking about how it affects the computer gaming industry - both from the perspective of the author, and the player.
There aren’t enough people out there who understand law and technology well. Karen is one of the few I’ve encountered who do, and not only that, but really, really gets Free software and the impact of the four freedoms on users in a way many pure technologists do not. She’s had a successful legal career that’s transitioned into being the general counsel for the Software Freedom Law Center, been the executive director of GNOME and is now the executive director of the Software Freedom Conservancy. As someone who likes to think he knows a little bit about law and technology I found Karen’s wealth of knowledge and eloquence slightly intimidating the first time I saw her speak (I think at some event in San Francisco), but I’ve subsequently (gratefully) discovered she has an incredible amount of patience (and ability) when trying to explain the nuances of free software legal issues.
At the past two DebConfs Thomas Goirand of infomaniak has run a workshop on using a Yubikey, and been generous enough to provide a number of devices for Debian folk. Last year I was fortunate enough to get hold of one of the devices on offer.
My primary use for the device is to hold my PGP key. Generally my OpenPGP hardware token of choice is the Gnuk, which features a completely Free software stack and an open hardware design, but the commonly available devices suffer from being a bit more fragile than I’d like to regularly carry around with me. The Yubikey has a much more robust design, being a slim plastic encapsulated device. I finally set it up properly with my PGP key last November, and while I haven’t attached it to my keyring I’ve been carrying it with me regularly.
Firstly, it’s been perfectly fine from a physical robustness point of view. I don’t worry about it being in my pocket with keys or change, it gets thrown into my bag at the end of the day when I go home, it kicks around my desk and occasionally gets stuff dropped on it. I haven’t tried to break it deliberately and I’m not careless with it, but it’s not treated with kid gloves. And it’s still around nearly a year later. So that’s good.
Secondly, I find my initial expected use case (holding my PGP subkeys and using the auth subkey for SSH access) is the major use I have for the key. I occasionally use the signing subkey for doing Debian uploads, I rarely use the encryption subkey, but I use the auth subkey most days. I’ve also setup U2F for any site I use that supports it, but generally once I’m logged in there on trusted machines I don’t need to regularly re-use it. It’s nice to have though, and something the Gnuk doesn’t offer.
On the down side, I still want a device that requires a physical key press for any signing operation. My preferred use case is leaving the key plugged into the machine to handle SSH logins, but the U2F use case seems to be to insert the key only when needed, and then press it. OpenPGP operation with the Yubikey doesn’t require a physical touch. I get round some of this by enabling the `confirm` option with gpg-agent, but I’d still be happier with something on the token itself. The Yubikey also doesn’t do ECC keys, but it does do 4096-bit RSA, so it’s not terrible, just results in larger keys than ideal.
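The `confirm` option for SSH use goes in gpg-agent's `~/.gnupg/sshcontrol` file, as a flag after the keygrip and TTL; with it set the agent prompts before each use of that key. The keygrip below is obviously a placeholder for your auth key's:

```
# ~/.gnupg/sshcontrol: <keygrip> <ttl> [flags]
0123456789ABCDEF0123456789ABCDEF01234567 0 confirm
```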
Overall I’m happy with the device, and grateful to Thomas and infomaniak for providing me with it, though I’m hopeful about a new version of the Gnuk with a more robust form factor/casing. (If you’re looking for discussion on how to setup the token with GPG subkeys then I recommend Thomas’ presentation from 2018, which covers all the steps required.)
Update: It’s been pointed out to me by several people that the Yubikey can be configured to require a touch for OpenPGP usage, using either `ykman` or yubitouch.