Burn it all
I am generally positive about my return to Northern Ireland, and my decision to stay here. Things are much better than when I was growing up and there’s a lot more going on here these days. There’s an active tech scene and the quality of life is pretty decent. That said, this time of year is one that always dampens my optimism. TLDR: This post brings no joy. This is the darkest timeline.
First, we have the usual bonfire issues. I’m all for setting things on fire while having a drink, but when your bonfire is so big it leads to nearby flat residents being evacuated to a youth hostel for the night or you decide that adding 1800 tyres to your bonfire is a great idea, it’s time to question whether you’re celebrating your cultural identity while respecting those around you, or just a clampit (thanks, @Bolster). If you’re starting to displace people from their homes, or releasing lots of noxious fumes that are a risk to your health and that of your local community, you need to take a hard look at the message you’re sending out.
Secondly, we have the House of Commons vote on Tuesday to amend the Northern Ireland (Executive Formation) Bill to require the government to bring forward legislation to legalise same-sex marriage and abortion in Northern Ireland. On the face of it this is a good thing; both are things the majority of the NI population want legalised and it’s an area of division between us and the rest of the UK (and, for that matter, Ireland). Dig deeper and it doesn’t tell a great story about the Northern Ireland Assembly. The bill is being brought in the first place because (at the time of writing) it’s been 907 days since Northern Ireland had a government. The current deadline for forming an executive is August 25th, or another election must be held. The bill extends this to October 21st, with an option to extend it further to January 13th. That’ll be 3 years since the assembly sat. That’s not what I voted for; I want my elected officials to actually do their jobs - I may not agree with all of their views, but it serves NI much more to have them turning up and making things happen than failing to do so. Especially during this time of uncertainty about borders and financial stability.
It’s also important to note that the amendments only kick in if an executive is not formed by October 21st - if there’s a functioning local government it’s expected to step in and enact the appropriate legislation to bring NI into compliance with its human rights obligations, as determined by the Supreme Court. It’s possible that this will provide some impetus to the DUP to re-form the assembly in NI. Equally it’s possible that it will make it less likely that Sinn Fein will rush to re-form it, as both amendments cover issues they have tried to resolve in the past.
Equally while I’m grateful to Stella Creasy and Conor McGinn for proposing these amendments, it’s a rare example of Westminster appearing to care about Northern Ireland at all. The ‘backstop’ has been bandied about as a political football, with more regard paid to how many points Tory leadership contenders can score off each other than what the real impact will be upon the people in Northern Ireland. It’s the most attention anyone has paid to us since the Good Friday Agreement, but it’s not exactly the right sort of attention.
I don’t know what the answer is. Since the GFA politics in Northern Ireland has mostly just got more polarised rather than us finding common ground. The most recent EU elections returned an Alliance MEP, Naomi Long, for the first time, which is perhaps some sign of a move to non-sectarian politics, but the real test would be what a new Assembly election would look like. I don’t hold out any hope that we’d get a different set of parties in power.
Still, I suppose at least it’s a public holiday today. Here’s hoping the pub is open for lunch.
Support your local Hackerspace
My first Hackerspace was Noisebridge. It was full of smart and interesting people and I never felt like I belonged, but I had just moved to San Francisco and it had interesting events, like 5MoF, and provided access to basic stuff I hadn’t brought with me, like a soldering iron. While I was never a heavy user of the space I very much appreciated its presence, and availability even to non-members. People were generally welcoming, it was a well stocked space and there was always something going on.
These days my local hackerspace is Farset Labs. I don’t have a need for tooling in the same way, being lucky enough to have space at home and access to all the things I didn’t move to the US, but it’s still a space full of smart and interesting people that has interesting events. And mostly that’s how I make use of the space - I attend events there. It’s one of many venues in Belfast that are part of the regular Meetup scene, and for a while I was just another meetup attendee. A couple of things changed the way I looked at it. Firstly, for whatever reason, I have more of a sense of belonging. It could be because the tech scene in Belfast is small enough that you’ll bump into the same people at wildly different events, but I think that’s true of the tech scene in most places. Secondly, I had the realisation (and this is obvious once you say it, but still) that Farset was the only non-commercial venue that was hosting these events. It’s predominantly funded by members’ fees; it’s not getting Invest NI or government subsidies (though I believe Weavers Court is a pretty supportive landlord).
So I became a member. It then took me several months after signing up to actually be in the space again, but I feel it’s the right thing to do; without the support of their local tech communities hackerspaces can’t thrive. I’m probably in Farset at most once a month, but I’d miss it if it wasn’t there. Plus I don’t want to see such a valuable resource disappear from the Belfast scene.
And that would be my message to you, dear reader. Support your local hackerspace. Become a member if you can afford it, donate what you can if not, or just show up and help out - as non-commercial entities things generally happen as a result of people turning up and volunteering their time to help out.
(This post prompted by a bunch of Small Charity Week tweets last week singing the praises of Farset, alongside the announcement today that Farset Labs is expanding - if you use the space and have been considering becoming a member or even just donating, now is the time to do it.)
NIDevConf 19 slides on Home Automation
The 3rd Northern Ireland Developer Conference was held yesterday, once again in Riddel Hall at QUB. It’s a good venue for a great conference and as usual it was a thoroughly enjoyable day, with talks from the usual NI suspects as well as some people who were new to me. I finally submitted a talk this year, and ended up speaking about my home automation setup - basically stringing together a bunch of the information I’ve blogged about here over the past year or so. It seemed to go well other than having a bit too much content for the allocated time, but I got the main arc covered and mostly just had to skim through the additional information. I’ve had a similar talk accepted for DebConf19 this summer, with a longer time slot that will allow me to go into a bit more detail about how Debian has enabled each of the pieces.
Slides from yesterday’s presentation are below; if you’re a regular reader I doubt there’ll be anything new and it’s a slide deck very much intended to be talked around rather than stand alone so if you weren’t there they’re probably not that useful. There’s a recording of the talk which I don’t hate as much as I thought I would (and the rest of the conference is also on the NIDevConf Youtube channel).
Note that a lot of the slides have very small links at the bottom which will take you to either a blog post expanding on the details, or an external reference I think is useful.
Also available for direct download.
More Yak Shaving: Moving to nftables to secure Home Assistant
When I set up Home Assistant last year one of my niggles was that it wanted an entire subdomain, rather than being able to live under a subdirectory. I had a desire to stick various things behind a single SSL host on my home network (my UniFi controller is the other main one), rather than having to mess about with either SSL proxies in every container running a service, or a bunch of separate host names (in particular one for the backend and one for the SSL certificate, for each service) in order to proxy in a single host.
I’ve recently done some reorganisation of my network, including building a new house server (which I’ll get round to posting about eventually) and decided to rethink the whole SSL access thing. As a starting point I had:
- Services living in their own containers
- Another container already running Apache, with SSL enabled + a valid external Let’s Encrypt certificate
And I wanted:
- SSL access to various services on the local network
- Not to have to run multiple copies of Apache (or any other TLS proxy)
- Valid SSL certs that would validate correctly on browsers without kludges
- Not to have to have things like hass-host as the front end name and hass-backend-host as the actual container name
It dawned on me that all access to the services was already being directed through the server itself, so there was a natural redirection point. I hatched a plan to do a port-level redirect there, sending all HTTPS traffic destined for the service containers to the container running Apache instead. It would then be possible to limit access to the services (e.g. port 8123 for Home Assistant) to the Apache host, tightening up access, and the actual SSL certificate would have the service name in it.
First step was to figure out how to do the appropriate redirection. I was reasonably sure this would involve some sort of DNAT in iptables, but I couldn’t find a clear indication that it was possible (there was a lot of discussion about how you also ended up needing SNAT, and I needed multiple redirections to 443 on the Apache container, so that wasn’t going to fly). Having now solved the problem I think iptables could have done it just fine, but I ended up being steered down the nftables route. This is long overdue; it’s been available since Linux 3.13 but, lacking a good reason to move beyond iptables, I hadn’t yet done so (in the same way I clung to ipfwadm and ipchains until I had to move).
There’s a neat tool, iptables-restore-translate, which can take the output of iptables-save and provide a simple translation to nftables. That was a good start, but what was neater was moving to the inet filter instead of ip, which then meant I could write one set of rules that applied to both IPv4 and IPv6 services. No need for rule duplication! The ability to write a single configuration file was also nicer than the sh script I’d been using to configure iptables. I expect to be able to write a cleaner set of rules as I learn more, and although it’s not relevant for the traffic levels I’m shifting I understand the rule parsing is generally more efficient if written properly.
Finally there’s an nftables systemd service in Debian, so systemctl enable nftables turned on processing of /etc/nftables.conf on restart rather than futzing with a pre-up in /etc/network/interfaces.
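In practice the conversion boils down to a couple of commands. This is a sketch of the standard workflow rather than an exact transcript of what I ran:

    # Dump the current iptables ruleset and translate it to nft syntax
    iptables-save > iptables.rules
    iptables-restore-translate -f iptables.rules > /etc/nftables.conf

    # Review and tidy the result, then load it and enable it at boot
    nft -f /etc/nftables.conf
    systemctl enable nftables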
With all the existing config moved over, the actual redirection was easy. I added the following block to the end of nftables.conf (I had no NAT previously in place), which redirects HTTPS traffic directed at 192.168.2.3 towards 192.168.2.2 instead.
nftables dnat configuration
table ip nat {
    chain prerouting {
        type nat hook prerouting priority 0
        # Redirect incoming HTTPS to Home Assistant to Apache proxy
        iif "enp24s0" ip daddr 192.168.2.3 tcp dport https \
            dnat 192.168.2.2
    }

    chain postrouting {
        type nat hook postrouting priority 100
    }
}
I think the key here is I can guarantee that any traffic coming back from the Apache proxy is going to pass through the host doing the DNAT; each container has a point-to-point link configured rather than living on a network bridge. If there was a possibility traffic from the proxy could go direct to the requesting host (e.g. they were on a shared LAN) then you’d need to do SNAT as well so the proxy would return the traffic to the NAT host which would then redirect to the requesting host.
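If I did need that, I’d expect it to look something like the following addition to the postrouting chain (untested and hypothetical, since my point-to-point setup avoids the problem):

    chain postrouting {
        type nat hook postrouting priority 100
        # Hypothetical: rewrite the source of proxied connections so the
        # proxy's replies have to come back through this host to be un-NATed
        ip daddr 192.168.2.2 tcp dport https masquerade
    }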
Apache was then configured as a reverse proxy, with my actual config ending up as follows. For now I’ve restricted access to within my house; I’m still weighing up the pros and cons of exposing access externally without the need for a tunnel. The domain I used on my internal network is a proper registered thing, so although I don’t expose any IP addresses externally I’m able to use Mythic Beasts’ DNS validation instructions and have a valid cert.
Apache proxy config for Home Assistant
<VirtualHost *:443>
    ServerName hass-host

    ProxyPreserveHost On
    ProxyRequests off
    RewriteEngine on

    # Anything under /local/ we serve, otherwise proxy to Home Assistant
    RewriteCond %{REQUEST_URI} '/local/.*'
    RewriteRule .* - [L]

    RewriteCond %{HTTP:Upgrade} =websocket [NC]
    RewriteRule /(.*) ws://hass-host:8123/$1 [P,L]
    ProxyPassReverse /api/websocket ws://hass-host:8123/api/websocket

    RewriteCond %{HTTP:Upgrade} !=websocket [NC]
    RewriteRule /(.*) http://hass-host:8123/$1 [P,L]
    ProxyPassReverse / http://hass-host:8123/

    SSLEngine on
    SSLCertificateFile /etc/ssl/le.crt
    SSLCertificateKeyFile /etc/ssl/private/le.key
    SSLCertificateChainFile /etc/ssl/lets-encrypt-x3-cross-signed.crt

    # Static files can be hosted here instead of via Home Assistant
    Alias /local/ /srv/www/hass-host/
    <Directory /srv/www/hass-host/>
        Options -Indexes
    </Directory>

    # Only allow access from inside the house
    ErrorDocument 403 "Not for you."
    <Location />
        Order Deny,Allow
        Deny from all
        Allow from 192.168.1.0/24
    </Location>
</VirtualHost>
I’ve done the same for my UniFi controller; the DNAT works exactly the same, while the Apache reverse proxy config is slightly different - a change in some of the paths and config to ignore the fact there’s no valid SSL cert on the controller interface.
Apache proxy config for Unifi Controller
<VirtualHost *:443>
    ServerName unifi-host

    ProxyPreserveHost On
    ProxyRequests off

    SSLProxyEngine on
    SSLProxyVerify off
    SSLProxyCheckPeerCN off
    SSLProxyCheckPeerName off
    SSLProxyCheckPeerExpire off

    AllowEncodedSlashes NoDecode
    ProxyPass /wss/ wss://unifi-host:8443/wss/
    ProxyPassReverse /wss/ wss://unifi-host:8443/wss/
    ProxyPass / https://unifi-host:8443/
    ProxyPassReverse / https://unifi-host:8443/

    SSLEngine on
    SSLCertificateFile /etc/ssl/le.crt
    SSLCertificateKeyFile /etc/ssl/private/le.key
    SSLCertificateChainFile /etc/ssl/lets-encrypt-x3-cross-signed.crt

    # Only allow access from inside the house
    ErrorDocument 403 "Not for you."
    <Location />
        Order Deny,Allow
        Deny from all
        Allow from 192.168.1.0/24
    </Location>
</VirtualHost>
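Both virtual hosts assume the relevant Apache modules are loaded; the list below is inferred from the config above rather than anything I noted down at the time. On Debian that’s roughly:

    # Modules the proxy configs above rely on
    a2enmod ssl rewrite proxy proxy_http proxy_wstunnel
    systemctl restart apache2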
(worth pointing out that one of my other Home Assistant niggles has also been fixed - there’s now the ability to set up multiple users and move API access over to OAuth, rather than a single password providing full access. It still needs more work in terms of ACLs for users, but that’s a bigger piece of work.)
Go Baby Go
I’m starting a new job next month and their language of choice is Go. Which means I have a good reason to finally get around to learning it (far too many years after I saw Marga talk about it at DebConf). For that I find I need a project - it’s hard to find the time to just do programming exercises, whereas if I’m working towards something it’s a bit easier. Naturally I decided to do something home automation related. In particular I bought a couple of Xiaomi Mijia Temperature/Humidity sensors a while back which also report via Bluetooth. I had a set of shell scripts polling them every so often to get the details, but it turns out they broadcast the current status every 2 seconds. Passively listening for that is a better method as it reduces power consumption on the device - no need for a 2 way handshake like with a manual poll. So, the project: passively listen for BLE advertisements, make sure they’re from the Xiaomi device and publish them via MQTT every minute.
One thing that puts me off new languages is when they have a fast moving implementation - telling me I just need to fetch the latest nightly to get all the features I’m looking for is a sure fire way to make me hold off trying something. Go is well beyond that stage, so I grabbed the 1.11 package from Debian buster. That’s only one release behind current, so I felt reasonably confident I was using a good enough variant. For MQTT the obvious choice was the Eclipse Paho MQTT client. Bluetooth was a bit trickier - there were more options than I expected (including one by Paypal), but I settled on go-ble (sadly now in archived mode), primarily because it was the first one where I could easily figure out how to passively scan without needing to hack up any of the library code.
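The rough shape of the approach looks something like the sketch below. This isn’t the actual code - the broker address, topic and sensor MAC are placeholders, the Mijia payload decoding is elided, and it simply forwards raw service data - but it shows how the go-ble and Paho pieces fit together:

package main

import (
	"context"
	"fmt"
	"log"
	"strings"
	"time"

	mqtt "github.com/eclipse/paho.mqtt.golang"
	"github.com/go-ble/ble"
	"github.com/go-ble/ble/linux"
)

// Placeholder values - not the real sensor address, broker or topic.
const (
	sensorAddr = "4c:65:a8:00:00:00"
	broker     = "tcp://mqtt-host:1883"
	topic      = "sensors/mijia/raw"
)

func main() {
	// Connect to the MQTT broker using the Eclipse Paho client.
	opts := mqtt.NewClientOptions().AddBroker(broker).SetClientID("mijia-bridge")
	client := mqtt.NewClient(opts)
	if token := client.Connect(); token.Wait() && token.Error() != nil {
		log.Fatal(token.Error())
	}

	// Open the default HCI device via go-ble.
	dev, err := linux.NewDevice()
	if err != nil {
		log.Fatal(err)
	}
	ble.SetDefaultDevice(dev)

	var lastPublish time.Time

	// Called for every advertisement seen while scanning.
	handler := func(a ble.Advertisement) {
		if !strings.EqualFold(a.Addr().String(), sensorAddr) {
			return
		}
		// Only publish once a minute, as described above.
		if time.Since(lastPublish) < time.Minute {
			return
		}
		for _, sd := range a.ServiceData() {
			// A real version decodes the Mijia service data into temperature
			// and humidity; here the raw bytes are forwarded as hex.
			payload := fmt.Sprintf("%x", sd.Data)
			client.Publish(topic, 0, false, payload).Wait()
			lastPublish = time.Now()
		}
	}

	// Scan until interrupted, allowing duplicate reports so the periodic
	// broadcasts from the same device keep arriving. (go-ble's default scan
	// is active; a truly passive scan needs extra HCI options.)
	ctx := ble.WithSigHandler(context.WithCancel(context.Background()))
	if err := ble.Scan(ctx, true, handler, nil); err != nil {
		log.Fatal(err)
	}
}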
With all those pieces it was fairly easy to throw together something that does the required steps in about 200 lines of code. That seems comparable to what I think it would have taken in Python, and to a large extent the process felt a lot closer to writing something in Python than in C.
Now, this wasn’t a big task in any way, but it was a real problem I wanted to solve and it brought together various pieces that helped provide me with an introduction to Go. I’ve a lot more to learn, but I figure I should write up my initial takeaways. There’s no mention of goroutines or channels or things like that - I’m aware of them, but I haven’t yet had a reason to use them so don’t have an informed opinion at this point.
I should point out I read Rob Pike’s Go at Google talk first, which helped me understand the mindset behind Go a lot - it’s not trying to solve the same problem as Rust, for example, but very much tailored towards a set of the problems that Google see with large scale software development. Also I’m primarily coming from a background in C and C++ with a bit of Perl and Python thrown in.
The Ecosystem is richer than I expected
I was surprised at the variety of Bluetooth libraries available to me. For a while I wasn’t sure I was going to find one that could do what I needed without hackery, but most of the Python BLE modules have the same problem.
Static binaries are convenient
Go builds a mostly static binary - my tool only links against various libraries from libc, with the Bluetooth and MQTT Go modules statically linked into the executable file. With my distro minded head on I object to this; it means I need a complete rebuild in case of any modification to the underlying modules. However the machine I’m running the tool on is different than the machine I developed on and there’s no doubt that being able to copy a single binary over rather than having to worry about all the supporting bits as well is a real time saver.
The binaries are huge
This is the flip side of static binaries, I guess. My tool is a 7.6MB binary file. That’s not a problem on my AMD64 server, but even though Go seems to have Linux/MIPS support I doubt I’ll be running things built using it on my OpenWRT router. Memory usage seems sane enough, but that size of file is a significant chunk of the available flash storage for small devices.
Module versioning isn’t as horrible as I expected
A few years back I attended a Go talk locally and asked a question about module versioning and the fact that by default modules were pulled directly from Git repositories, seemingly without any form of versioning. The speaker admitted that their example code had in fact failed to compile the previous day because of a change upstream that changed an API. These days things seem better; I was pointed at go mod and in particular setting GO111MODULE=on for my 1.11 compiler, and when I first built my code Go created a go.mod with a set of versioned dependencies. I’m still wary of build systems that automatically grab code from the internet, and the pinning of versions conflicts with the ability to automatically rebuild and pick up module security fixes, but at least there seems to be some thought going into this these days.
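For reference, with a 1.11 toolchain that workflow is roughly the following (the module path here is made up):

    export GO111MODULE=on
    go mod init github.com/example/mijia-mqtt   # creates go.mod; path is hypothetical
    go build                                    # resolves imports, pins versions in go.mod and go.sum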
I love maps
Really this is more a generic thing I miss when I write C. Perl hashes, Python dicts, Go maps. An ability to easily stash things by arbitrary reference without having to worry about reallocation of the holding structure. I haven’t delved into other features Go has over C particularly yet so I’m sure there’s more to take advantage of, but maps are a good start.
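As a trivial illustration of what I mean (made-up keys and values):

package main

import "fmt"

func main() {
	// Stash readings by an arbitrary string key; the map grows as needed,
	// with no manual management of the underlying storage.
	readings := make(map[string]float64)
	readings["living_room/temperature"] = 21.5
	readings["hall/humidity"] = 48.0
	if temp, ok := readings["living_room/temperature"]; ok {
		fmt.Printf("living room is %.1f C\n", temp)
	}
}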
The syntax is easy enough
The syntax for Go felt comfortable enough to me. I had to look a few bits and pieces up, but nothing grated. go fmt is a nice touch; I like the fact that modern languages are starting to have a well defined preferred style. It’s a long time since I wrote any Pascal, but as a C programmer things made sense.
I’m still not convinced about garbage collection
One of the issues I hit while developing my tool was that it would sit and spin and take more and more memory. This turned out to be a combination of some flaky Bluetooth hardware returning odd responses, and my failure to handle the returned error message. Ultimately this resulted in a resource leak causing the growing memory use. This would still have been possible without garbage collection, but I think not having to think about memory allocation/deallocation made me more complacent. Relying on the garbage collector to free up resources means you have to be sure nothing is holding a reference any more (even if it won’t use it). I think it will take further time with Go development to fully make my mind up, but for now I’m still wary.
Code, in the unlikely event it’s helpful to anyone, is on GitHub.