Support your local Hackerspace
My first hackerspace was Noisebridge. It was full of smart and interesting people, and although I never felt like I belonged, I had just moved to San Francisco and it had interesting events, like 5MoF, and provided access to basic equipment I hadn’t brought with me, like a soldering iron. While I was never a heavy user of the space I very much appreciated its presence, and its availability even to non-members. People were generally welcoming, it was a well-stocked space and there was always something going on.
These days my local hackerspace is Farset Labs. I don’t have a need for tooling in the same way, being lucky enough to have space at home and access to all the things I didn’t move to the US, but it’s still a space full of smart and interesting people that runs interesting events. And mostly that’s how I make use of the space - I attend events there. It’s one of many venues in Belfast that are part of the regular Meetup scene, and for a while I was just another meetup attendee. A couple of things changed the way I looked at it. Firstly, for whatever reason, I have more of a sense of belonging. It could be because the tech scene in Belfast is small enough that you’ll bump into the same people at wildly different events, but I think that’s true of the tech scene in most places. Secondly, I had the realisation (obvious once you say it, but still) that Farset was the only non-commercial venue hosting these events. It’s predominantly funded by members’ fees; it’s not getting Invest NI or government subsidies (though I believe Weavers Court is a pretty supportive landlord).
So I became a member. It then took me several months after signing up to actually be in the space again, but I feel it’s the right thing to do; without the support of their local tech communities hackerspaces can’t thrive. I’m probably in Farset at most once a month, but I’d miss it if it wasn’t there. Plus I don’t want to see such a valuable resource disappear from the Belfast scene.
And that would be my message to you, dear reader. Support your local hackerspace. Become a member if you can afford it, donate what you can if not, or just show up and help out - hackerspaces are non-commercial entities, so things generally happen as a result of people turning up and volunteering their time.
(This post prompted by a bunch of Small Charity Week tweets last week singing the praises of Farset, alongside the announcement today that Farset Labs is expanding - if you use the space and have been considering becoming a member or even just donating, now is the time to do it.)
NIDevConf 19 slides on Home Automation
The 3rd Northern Ireland Developer Conference was held yesterday, once again in Riddel Hall at QUB. It’s a good venue for a great conference and as usual it was a thoroughly enjoyable day, with talks from the usual NI suspects as well as some people who were new to me. I finally submitted a talk this year, and ended up speaking about my home automation setup - basically stringing together a bunch of the information I’ve blogged about here over the past year or so. It seemed to go well other than having a bit too much content for the allocated time, but I got the main arc covered and mostly just had to skim through the additional information. I’ve had a similar talk accepted for DebConf19 this summer, with a longer time slot that will allow me to go into a bit more detail about how Debian has enabled each of the pieces.
Slides from yesterday’s presentation are below; if you’re a regular reader I doubt there’ll be anything new, and it’s a slide deck very much intended to be talked around rather than stand alone, so if you weren’t there it’s probably not that useful. There’s a recording of the talk which I don’t hate as much as I thought I would (and the rest of the conference is also on the NIDevConf YouTube channel).
Note that a lot of the slides have very small links at the bottom which will take you to either a blog post expanding on the details, or an external reference I think is useful.
Also available for direct download.
More Yak Shaving: Moving to nftables to secure Home Assistant
When I set up Home Assistant last year one of my niggles was that it wanted an entire subdomain, rather than being able to live under a subdirectory. I had a desire to stick various things behind a single SSL host on my home network (my UniFi controller is the other main one), rather than having to mess about with either SSL proxies in every container running a service, or a bunch of separate host names (in particular one for the backend and one for the SSL certificate, for each service) in order to proxy in a single host.
I’ve recently done some reorganisation of my network, including building a new house server (which I’ll get round to posting about eventually) and decided to rethink the whole SSL access thing. As a starting point I had:
- Services living in their own containers
- Another container already running Apache, with SSL enabled + a valid external Let’s Encrypt certificate
And I wanted:
- SSL access to various services on the local network
- Not to have to run multiple copies of Apache (or any other TLS proxy)
- Valid SSL certs that would validate correctly on browsers without kludges
- Not to have to have things like `hass-host` as the front end name and `hass-backend-host` as the actual container name
It dawned on me that all access to the services was already being directed through the server itself, so there was a natural redirection point. I hatched a plan to do a port level redirect there, sending all HTTPS traffic to the service containers to the container running Apache. It would then be possible to limit access to the services (e.g. port 8123 for Home Assistant) to the Apache host, tightening up access, and the actual SSL certificate would have the service name in it.
First step was to figure out how to do the appropriate redirection. I was reasonably sure this would involve some sort of DNAT in `iptables`, but I couldn’t find a clear indication that it was possible (there was a lot of discussion about how you also ended up needing SNAT, and I needed multiple redirections to 443 on the Apache container, so that wasn’t going to fly). Having now solved the problem I think `iptables` could have done it just fine, but I ended up being steered down the `nftables` route. This is long overdue; it’s been available since Linux 3.13 but, lacking a good reason to move beyond `iptables`, I hadn’t yet done so (in the same way I clung to `ipfwadm` and `ipchains` until I had to move).
There’s a neat tool, `iptables-restore-translate`, which can take the output of `iptables-save` and provide a simple translation to `nftables`. That was a good start, but what was neater was moving to the `inet` filter instead of `ip`, which meant I could write one set of rules that applied to both IPv4 and IPv6 services. No need for rule duplication! The ability to write a single configuration file was also nicer than the `sh` script I had to configure `iptables`. I expect to be able to write a cleaner set of rules as I learn more, and although it’s not relevant for the traffic levels I’m shifting I understand the rule parsing is generally more efficient if written properly. Finally there’s an `nftables` systemd service in Debian, so `systemctl enable nftables` turned on processing of `/etc/nftables.conf` on restart rather than futzing with a `pre-up` in `/etc/network/interfaces`.
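The migration described above boils down to a handful of commands; a sketch of the workflow (file names here are my own, and `nft -c` is just a dry-run syntax check before committing):

```
iptables-save > rules.v4
iptables-restore-translate -f rules.v4 > /etc/nftables.conf
nft -c -f /etc/nftables.conf
systemctl enable nftables
systemctl start nftables
```

From there the generated rules can be hand-edited towards the `inet` family to cover IPv4 and IPv6 together.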
With all the existing config moved over, the actual redirection was easy. I added the following block to the end of `nftables.conf` (I had no NAT previously in place), which redirects HTTPS traffic directed at `192.168.2.3` towards `192.168.2.2` instead.
nftables dnat configuration
table ip nat {
chain prerouting {
type nat hook prerouting priority 0
# Redirect incoming HTTPS to Home Assistant to Apache proxy
iif "enp24s0" ip daddr 192.168.2.3 tcp dport https \
dnat to 192.168.2.2
}
chain postrouting {
type nat hook postrouting priority 100
}
}
I think the key here is I can guarantee that any traffic coming back from the Apache proxy is going to pass through the host doing the DNAT; each container has a point-to-point link configured rather than living on a network bridge. If there was a possibility traffic from the proxy could go direct to the requesting host (e.g. they were on a shared LAN) then you’d need to do SNAT as well so the proxy would return the traffic to the NAT host which would then redirect to the requesting host.
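For that shared-LAN case, the fix would be a masquerade (SNAT) rule in the otherwise empty postrouting chain, so the proxy sees the NAT host as the client and sends replies back through it. A sketch of what that might look like - not something my point-to-point setup needs:

```
table ip nat {
	chain postrouting {
		type nat hook postrouting priority 100
		# Rewrite the source of DNATed HTTPS traffic to this host's
		# address, so the proxy's replies route back through it
		ip daddr 192.168.2.2 tcp dport https masquerade
	}
}
```

The cost is that the proxy then only ever sees connections from the NAT host, losing the real client address in its logs.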
Apache was then configured as a reverse proxy, with my actual config ending up as follows. For now I’ve restricted access to within my house; I’m still weighing up the pros and cons of exposing access externally without the need for a tunnel. The domain I used on my internal network is a proper registered thing, so although I don’t expose any IP addresses externally I’m able to use Mythic Beasts’ DNS validation instructions and have a valid cert.
Apache proxy config for Home Assistant
<VirtualHost *:443>
ServerName hass-host
ProxyPreserveHost On
ProxyRequests off
RewriteEngine on
# Anything under /local/ we serve, otherwise proxy to Home Assistant
RewriteCond %{REQUEST_URI} '/local/.*'
RewriteRule .* - [L]
RewriteCond %{HTTP:Upgrade} =websocket [NC]
RewriteRule /(.*) ws://hass-host:8123/$1 [P,L]
ProxyPassReverse /api/websocket ws://hass-host:8123/api/websocket
RewriteCond %{HTTP:Upgrade} !=websocket [NC]
RewriteRule /(.*) http://hass-host:8123/$1 [P,L]
ProxyPassReverse / http://hass-host:8123/
SSLEngine on
SSLCertificateFile /etc/ssl/le.crt
SSLCertificateKeyFile /etc/ssl/private/le.key
SSLCertificateChainFile /etc/ssl/lets-encrypt-x3-cross-signed.crt
# Static files can be hosted here instead of via Home Assistant
Alias /local/ /srv/www/hass-host/
<Directory /srv/www/hass-host/>
Options -Indexes
</Directory>
# Only allow access from inside the house
ErrorDocument 403 "Not for you."
<Location />
Order Deny,Allow
Deny from all
Allow from 192.168.1.0/24
</Location>
</VirtualHost>
I’ve done the same for my UniFi controller; the DNAT works exactly the same, while the Apache reverse proxy config is slightly different - a change in some of the paths and config to ignore the fact there’s no valid SSL cert on the controller interface.
Apache proxy config for UniFi Controller
<VirtualHost *:443>
ServerName unifi-host
ProxyPreserveHost On
ProxyRequests off
SSLProxyEngine on
SSLProxyVerify off
SSLProxyCheckPeerCN off
SSLProxyCheckPeerName off
SSLProxyCheckPeerExpire off
AllowEncodedSlashes NoDecode
ProxyPass /wss/ wss://unifi-host:8443/wss/
ProxyPassReverse /wss/ wss://unifi-host:8443/wss/
ProxyPass / https://unifi-host:8443/
ProxyPassReverse / https://unifi-host:8443/
SSLEngine on
SSLCertificateFile /etc/ssl/le.crt
SSLCertificateKeyFile /etc/ssl/private/le.key
SSLCertificateChainFile /etc/ssl/lets-encrypt-x3-cross-signed.crt
# Only allow access from inside the house
ErrorDocument 403 "Not for you."
<Location />
Order Deny,Allow
Deny from all
Allow from 192.168.1.0/24
</Location>
</VirtualHost>
(Worth pointing out that one of my other Home Assistant niggles has also been fixed - there’s now the ability to set up multiple users and to move API access to OAuth, rather than a single password providing full access. It still needs ACLs for users, but that’s a bigger piece of work.)
Go Baby Go
I’m starting a new job next month and their language of choice is Go, which means I have a good reason to finally get around to learning it (far too many years after I saw Marga talk about it at DebConf). For that I find I need a project - it’s hard to find the time to just do programming exercises, whereas if I’m working towards something it’s a bit easier. Naturally I decided to do something home automation related. In particular I bought a couple of Xiaomi Mijia Temperature/Humidity sensors a while back, which also report their readings via Bluetooth. I had a set of shell scripts polling them every so often for the details, but it turns out they broadcast the current status every 2 seconds. Passively listening for that is a better approach, as it reduces power consumption on the device - there’s no need for a two-way handshake as with a manual poll. So, the project: passively listen for BLE advertisements, make sure they’re from the Xiaomi devices, and publish the readings via MQTT every minute.
One thing that puts me off new languages is a fast moving implementation - telling me I just need to fetch the latest nightly to get all the features I’m looking for is a sure-fire way to make me hold off trying something. Go is well beyond that stage, so I grabbed the 1.11 package from Debian buster. That’s only one release behind current, so I felt reasonably confident I was using a good enough variant. For MQTT the obvious choice was the Eclipse Paho MQTT client. Bluetooth was a bit trickier - there were more options than I expected (including one by PayPal), but I settled on go-ble (sadly now in archived mode), primarily because it was the first one where I could easily figure out how to passively scan without needing to hack up any of the library code.
With all those pieces it was fairly easy to throw together something that does the required steps in about 200 lines of code. That seems comparable to what I think it would have taken in Python, and to a large extent the process felt a lot closer to writing something in Python than in C.
Now, this wasn’t a big task in any way, but it was a real problem I wanted to solve and it brought together various pieces that helped provide me with an introduction to Go. I’ve a lot more to learn, but I figure I should write up my initial takeaways. There’s no mention of goroutines or channels or things like that - I’m aware of them, but I haven’t yet had a reason to use them so don’t have an informed opinion at this point.
I should point out I read Rob Pike’s Go at Google talk first, which helped understand the mindset behind Go a lot - it’s not trying to solve the same problem as Rust, for example, but very much tailored towards a set of the problems that Google see with large scale software development. Also I’m primarily coming from a background in C and C++ with a bit of Perl and Python thrown in.
The Ecosystem is richer than I expected
I was surprised at the variety of Bluetooth libraries available to me. For a while I wasn’t sure I was going to find one that could do what I needed without hackery, but most of the Python BLE modules have the same problem.
Static binaries are convenient
Go builds a mostly static binary - my tool only links dynamically against various libraries from libc, with the Bluetooth and MQTT Go modules statically linked into the executable. With my distro-minded head on I object to this; it means a complete rebuild if any of the underlying modules change. However the machine I’m running the tool on is different from the machine I developed on, and there’s no doubt that being able to copy a single binary over, rather than having to worry about all the supporting bits as well, is a real time saver.
The binaries are huge
This is the flip side of static binaries, I guess. My tool is a 7.6MB binary file. That’s not a problem on my AMD64 server, but even though Go seems to have Linux/MIPS support I doubt I’ll be running things built using it on my OpenWRT router. Memory usage seems sane enough, but that size of file is a significant chunk of the available flash storage for small devices.
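Stripping debug information claws some of that back. This isn’t something from the original post, but the standard linker flags (`-s` drops the symbol table, `-w` drops the DWARF debug info; the output name here is made up):

```
go build -ldflags="-s -w" -o mi-temp-mqtt .
```

It helps, but it won’t get a Go binary anywhere near the size of an equivalent C tool for flash-constrained devices.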
Module versioning isn’t as horrible as I expected
A few years back I attended a Go talk locally and asked a question about module versioning and the fact that by default modules were pulled directly from Git repositories, seemingly without any form of versioning. The speaker admitted that their example code had in fact failed to compile the previous day because of an upstream change to an API. These days things seem better; I was pointed at `go mod`, and in particular setting `GO111MODULE=on` for my 1.11 compiler, and when I first built my code Go created a `go.mod` with a set of versioned dependencies. I’m still wary of build systems that automatically grab code from the internet, and the pinning of versions conflicts with the ability to automatically rebuild and pick up module security fixes, but at least there seems to be some thought going into this these days.
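For reference, the generated `go.mod` looks something like the following - the module path and version strings here are purely illustrative (untagged modules like go-ble get a pseudo-version derived from a commit date and hash), not copied from my actual file:

```
module github.com/example/mi-temp-mqtt

require (
	github.com/eclipse/paho.mqtt.golang v1.1.1
	github.com/go-ble/ble v0.0.0-20190521171521-0123456789ab
)
```

Subsequent builds then resolve exactly those versions rather than whatever Git HEAD happens to be.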
I love maps
Really this is more a generic thing I miss when I write C. Perl hashes, Python dicts, Go maps: the ability to easily stash things by arbitrary key without having to worry about reallocation of the holding structure. I haven’t particularly delved into the other features Go has over C yet, so I’m sure there’s more to take advantage of, but maps are a good start.
The syntax is easy enough
The syntax for Go felt comfortable enough to me. I had to look a few bits and pieces up, but nothing grated. `go fmt` is a nice touch; I like the fact that modern languages are starting to have a well-defined preferred style. It’s a long time since I wrote any Pascal, but as a C programmer things made sense.
I’m still not convinced about garbage collection
One of the issues I hit while developing my tool was that it would sit and spin and take more and more memory. This turned out to be a combination of some flaky Bluetooth hardware returning odd responses, and my failure to handle the returned error message. Ultimately this resulted in a resource leak causing the growing memory use. This would still have been possible without garbage collection, but I think not having to think about memory allocation/deallocation made me more complacent. Relying on the garbage collector to free up resources means you have to be sure nothing is holding a reference any more (even if it won’t use it). I think it will take further time with Go development to fully make my mind up, but for now I’m still wary.
Code, in the unlikely event it’s helpful to anyone, is on GitHub.
Setting up SSH 2FA using TOTP
I spend a lot of time connected to remote hosts. My email and IRC client live on a dedicated server with Bytemark, which makes it easy to access wherever I am in the world. I have a well connected VM for running Debian package builds on using sbuild. At home my Home Assistant setup lives in its own container. And of course that lives on a server which is in the comms room and doesn’t even have a video card installed. At work my test machines are all in the server room rather than noisily on my desk. I connect to all of these with SSH (and screen, though I keep meaning to investigate tmux more thoroughly) - I’ve been doing so since the days of dialup, I’m very happy with the command line and I generally don’t need the overhead of a remote GUI. I don’t think I’m unusual in this respect (especially among people likely to be reading this post).
One of the things I love about SSH is the ability to use SSH keys. That means I don’t have to remember passwords for hosts - they go in my password manager for emergencies, I login with them once to drop my `.ssh/authorised_keys` file in place, and then I forget them. For my own machines, where possible, I disable password logins entirely. However there are some hosts I want to be able to get to even without having an SSH key available, but equally would like a bit more security on. A while back I had a conversation with some local folk about the various options and decided that some sort of two-factor authentication (2FA) was an appropriate compromise; I was happy to trust an SSH key on its own, but for a password based login I wanted an extra piece of verification. I ended up putting Google Authenticator on my phone, which despite the name is actually a generic implementation of the TOTP and HOTP one-time password algorithms. It’s turned out useful for various websites as well (in particular at work I have no phone coverage and 2FA on O365; having Authenticator installed makes that easier than having to wave my phone near the window to get an SMS login token).
For the server side I installed the Google Authenticator PAM module, conveniently available in Debian with a simple `apt install libpam-google-authenticator`. I added:
auth required pam_google_authenticator.so nullok
to `/etc/pam.d/sshd` below the `@include common-auth` line, and changed
ChallengeResponseAuthentication no
in `/etc/ssh/sshd_config` to be `yes` instead. `systemctl restart sshd` restarts SSH and brings the new config into play. At this point password-only logins are still OK (thanks to the `nullok` above). To enable 2FA you then run `google-authenticator` as your normal user. This asks a bunch of questions - I went for TOTP (i.e. time based), disabled multiple uses and turned on rate limiting. The tool will display an ASCII art QR code (make sure your terminal window is big enough) that can be scanned by the phone app. From this point on the account will require an authentication code after a successful password entry, while SSH key only logins continue to work as before.
For the avoidance of doubt, this does not involve sending any information off to Google or any other network provider. TOTP/HOTP are self contained protocols, and it’s the scanning of the QR code/entering the secret key at setup time that binds the app to the server details. There are other app implementations which will work just fine.
(This post mostly serves to document the setup steps for my own reference; I set it up originally over a year ago and have just had to do so again for a new machine.)