Home Automation: Getting started with MQTT

May 10, 2018 / 0 comments

I’ve been thinking about trying to sort out some home automation bits. I’ve moved from having a 7-day heating timer to a 24-hour timer and I’d forgotten how annoying that is at weekends. I’d like to monitor temperatures in various rooms and use that, along with presence detection, to be a bit more intelligent about turning the heat on. Equally I wouldn’t mind tying my Alexa in to do some voice control of lighting (eventually maybe even using DeepSpeech to keep everything local).

Before all of that I need to get the basics in place. This is the first in a series of posts about putting together the right building blocks to allow some useful level of home automation / central control. The first step is working out how to glue everything together. A few years back someone told me MQTT was the way forward for IoT applications, being more lightweight than a RESTful interface and thus better suited to small devices. At the time I wasn’t convinced, but it seems they were right and MQTT is one of the more popular ways of gluing things together.

I found the HiveMQ series on MQTT Essentials to be a pretty good intro; my main takeaway was that MQTT is built around a central message broker: clients publish data to topics on the broker and any number of subscribers can consume it. TLS is supported for secure data transfer and there’s a whole bunch of different brokers and client libraries available. The use of a broker is potentially helpful in dealing with firewalling; publishers and subscribers only need to talk to the broker, rather than requiring any direct connection to each other.

With all that in mind I decided to set up a broker to play with the basics. I made the decision that it should run on my OpenWRT router - all the devices I want to hook up can easily see that device, and if it’s down then none of them are going to be able to get to a broker hosted anywhere else anyway. I’d seen plenty of info about Mosquitto and it’s already in the OpenWRT package repository. So I sorted out a Let’s Encrypt cert, installed Mosquitto and created a couple of test users:

opkg install mosquitto-ssl
mosquitto_passwd -b /etc/mosquitto/mosquitto.users user1 foo
mosquitto_passwd -b /etc/mosquitto/mosquitto.users user2 bar
chown mosquitto /etc/mosquitto/mosquitto.users
chmod 600 /etc/mosquitto/mosquitto.users

I then edited /etc/mosquitto/mosquitto.conf and made sure the following were set. In particular you need cafile set in order to enable TLS:

port 8883
cafile /etc/ssl/lets-encrypt-x3-cross-signed.pem
certfile /etc/ssl/mqtt.crt
keyfile /etc/ssl/mqtt.key

log_dest syslog

allow_anonymous false

password_file /etc/mosquitto/mosquitto.users
acl_file /etc/mosquitto/mosquitto.acl

Finally I created /etc/mosquitto/mosquitto.acl with the following:

user user1
topic readwrite #

user user2
topic read ro/#
topic readwrite test/#

That gives me user1, who has full access to everything, and user2, with read-only access to the ro/ tree and read/write access to the test/ tree.
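
Restarting Mosquitto picks up the new configuration; on OpenWRT that should just be a matter of the following (the netstat line is only there to confirm it’s now listening on the TLS port):

/etc/init.d/mosquitto restart
netstat -ltn | grep 8883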

To test everything was working I installed mosquitto-clients on a Debian test box and in one window ran:

mosquitto_sub -h mqtt-host -p 8883 --capath /etc/ssl/certs/ -v -t '#' -u user1 -P foo

and in another:

mosquitto_pub -h mqtt-host -p 8883 --capath /etc/ssl/certs/ -t 'test/message' -m 'Hello World!' -u user2 -P bar

(Without the --capath it’ll try a plain TCP connection rather than TLS, and won’t produce a useful error message.) This resulted in the mosquitto_sub instance outputting:

test/message Hello World!

Trying:

mosquitto_pub -h mqtt-host -p 8883 --capath /etc/ssl/certs/ -t 'test2/message' -m 'Hello World!' -u user2 -P bar

resulted in no output due to the ACL preventing it. All good and ready to actually make use of - of which more later.
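
As a taste of the sort of use that comes next, a hypothetical room sensor using the user2 credentials could push temperature readings with something like this (the topic and value are purely illustrative):

mosquitto_pub -h mqtt-host -p 8883 --capath /etc/ssl/certs/ -t 'test/livingroom/temperature' -m '19.5' -u user2 -P bar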

The excellent selection of Belfast Tech conferences

May 4, 2018 / 0 comments

Yesterday I was lucky enough to get to speak at BelTech, giving what I like to think of as a light-hearted rant entitled “10 Stupid Reasons You’re Not Using Free Software”. It’s based on various arguments I’ve heard throughout my career about why companies shouldn’t use or contribute to Free Software, and, as I’m sure you can guess, I think they’re mostly bad arguments. I only had a 20 minute slot for it, which was probably about right, and it seemed to go down fairly well. Normally the conferences I would pitch talks to would end up with me preaching to the converted, but I felt this one provided an audience who were probably already using Free software but hadn’t thought about it that much.

And that got me to thinking “Isn’t it fantastic that we have this range of events and tech groups in Belfast?”. I remember the days when the only game in town was the Belfast LUG (now on something like its 5th revival and still going strong), but these days you could spend every night of the month at a different tech event covering anything from IoT to Women Who Code to DevOps to FinTech. There’s a good tech community that’s built up, with plenty of crossover between the different groups.

An indicator of that is the number of conferences happening in the city, with many of them now regular fixtures in the annual calendar. In addition to BelTech I’ve already attended BelFOSS and Women Techmakers this year. Product Camp Belfast is happening today. NIDevConf is just over a month away (I’ll miss this year due to another commitment, but thoroughly enjoyed last year). WordCamp Belfast isn’t the sort of thing I’d normally notice, but the opportunity to see Heather Burns speak on the GDPR is really tempting. Asking around about what else is happening turned up B-Sides, Big Data Belfast and DigitalDNA.

How did we end up with such a vibrant mix of events (and no doubt many more I haven’t noticed)? They might not be major conferences that pull in an international audience, but in some ways I find that more heartening - there’s enough activity in the local tech scene to make this number of events make sense. And I think that’s pretty cool.

Using collectd for Exim stats

Apr 25, 2018 / 0 comments

I like graphing things; I find it’s a good way to look for abnormal patterns or try to track down the source of problems. For monitoring systems I started out with MRTG. It’s great for monitoring things via SNMP, but everything else needs some custom scripts. So at one point I moved my home network over to Munin, which is much better at graphing random bits and pieces, and coping with collecting data from remote hosts. Unfortunately it was quite heavyweight on the Thecus N2100 I was running as the central collection point at the time; data collection resulted in a lot of forking and general sluggishness. So I moved to collectd, which is written in C, relies much more on compiled plugins and doesn’t do a load of forks. It also supports a UDP based network protocol with authentication + encryption, which makes it great for running on hosts that aren’t always up - the collection point doesn’t hang around waiting for them when they’re not around.
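
For reference, the network plugin configuration for shipping stats to a central host looks roughly like the following - the hostname, port and credentials here are made up, and the collection host needs a matching <Listen> block with the same username/password in its AuthFile:

LoadPlugin network

<Plugin network>
    <Server "collector.example.org" "25826">
        SecurityLevel Encrypt
        Username "stats"
        Password "changeme"
    </Server>
</Plugin>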

The problem is that when it comes to things collectd doesn’t support out of the box it’s not quite so easy to get the stats - things a simple script would sort in MRTG need a bit more thought. You can go the full blown Python module route as I did for my Virgin Super Hub scripts, but that requires a bit of work. One of the things in particular I wanted to graph were stats for my mail servers and having to write a chunk of Python to do that seemed like overkill. Searching around found the Tail plugin, which follows a log file and applies regexes to look for stats. There are some examples for Exim on that page, but none were quite what I wanted. In case it’s of interest/use to anyone else, here’s what I ended up with (on Debian, of course, but I can’t see why it wouldn’t work elsewhere with minimal changes).

First I needed a new data set specification for email counts. I added this to /usr/share/collectd/types.db:

mail_count              value:COUNTER:0:65535

Note if you’re logging to a remote collectd host this needs to be on both the host where the stats are collected and the one receiving the stats.

I then dropped a file in /etc/collectd/collectd.conf.d/ called exim.conf containing the following. It’ll need tweaked depending on exactly what you log, but the first 4 <Match> stanzas should be generally useful. I have some additional logging (via log_message entries in the exim.conf deny statements) that helps me track mails that get greylisted, rejected due to ClamAV or rejected due to being listed in a DNSRBL. Tailor as appropriate for your setup:

LoadPlugin tail

<Plugin tail>
    <File "/var/log/exim4/mainlog">
        Instance "exim"
        Interval 60
        <Match>
            Regex "S=([1-9][0-9]*)"
            DSType "CounterAdd"
            Type "ipt_bytes"
            Instance "total"
        </Match>
        <Match>
            Regex "<="
            DSType "CounterInc"
            Type "mail_count"
            Instance "incoming"
        </Match>
        <Match>
            Regex "=>"
            DSType "CounterInc"
            Type "mail_count"
            Instance "outgoing"
        </Match>
        <Match>
            Regex "=="
            DSType "CounterInc"
            Type "mail_count"
            Instance "defer"
        </Match>
        <Match>
            Regex ": greylisted.$"
            DSType "CounterInc"
            Type "mail_count"
            Instance "greylisted"
        </Match>
        <Match>
            Regex "rejected after DATA: Malware:"
            DSType "CounterInc"
            Type "mail_count"
            Instance "malware"
        </Match>
        <Match>
            Regex "> rejected RCPT <.* is listed at"
            DSType "CounterInc"
            Type "mail_count"
            Instance "dnsrbl"
        </Match>
    </File>
</Plugin>
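
To check the counters are actually being collected you can poke at whatever write plugin you’re using; with the default rrdtool plugin on Debian the files should end up somewhere like the following (paths will vary with your hostname and DataDir setting):

ls /var/lib/collectd/rrd/$(hostname -f)/tail-exim/
rrdtool lastupdate /var/lib/collectd/rrd/$(hostname -f)/tail-exim/mail_count-incoming.rrd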

Finally, because my mail servers are low volume these days, I added a scaling filter to give me emails/minute rather than emails/second. This went in /etc/collectd/collectd.conf.d/filters.conf:

PreCacheChain "PreCache"
LoadPlugin match_regex
LoadPlugin target_scale

<Chain "PreCache">
    <Rule>
        <Match "regex">
            Plugin "^tail$"
            PluginInstance "^exim$"
            Type "^mail_count$"
            Invert false
        </Match>
        <Target "scale">
            Factor 60
        </Target>
    </Rule>
</Chain>

Update: Some examples…

[Graphs: Total email bytes; Incoming email count; Outgoing email count; Deferred email count; Emails rejected due to DNSRBL; Greylisted email count; Emails rejected due to malware]

First impressions of the Gemini PDA

Mar 19, 2018 / 0 comments

[Photo: Gemini PDA]

Last March I discovered the IndieGoGo campaign for the Gemini PDA, a plan to produce a modern PDA with a decent keyboard inspired by the Psion 5. At that point in time the estimated delivery date was November 2017, and it wasn’t clear they were going to meet their goals. As someone who has owned a variety of phones with keyboards, from a Nokia 9000i to a T-Mobile G1, I’ve been disappointed by the lack of mobile devices with keyboards. The Gemini seemed like a potential option, so I backed it, paying a total of $369 including delivery. And then I waited. And waited. And waited.

Finally, one year and a day after I backed the project, I received my Gemini PDA. Now, I don’t get as much use out of such a device as I would have in the past. The Gemini is definitely not a primary phone replacement. It’s not much bigger than my aging Honor 7 but there’s no external display to indicate who’s calling and it’s a bit clunky to have to open it to dial (I don’t trust Google Assistant to cope with my accent enough to have it ring random people). The 9000i did this well with an external keypad and LCD screen, but then it was a brick so it had the real estate to do such things. Anyway. I have a laptop at home, a laptop at work and I cycle between the 2. So I’m mostly either in close proximity to something portable enough to move around the building, or travelling in a way that means I couldn’t make use of one anyway.

My first opportunity to actually use the Gemini in anger therefore came last Friday, when I attended BelFOSS. I’d normally bring a laptop to a conference, but instead I decided to just bring the Gemini (in addition to my normal phone). I have the LTE version, so I put my FreedomPop SIM into it - this did limit the amount I could do with it due to the low data cap, but for a single day was plenty for SSH, email + web use. I already have the Pro version of the excellent JuiceSSH, am a happy user of K-9 Mail and tend to use Chrome these days as well. All 3 were obviously perfectly happy on the Android 7.1.1 install.

Aside: Why am I not running Debian on the device? Planet do have an image available from their Linux Support page, but it’s running on top of the crufty 3.18 Android kernel and isn’t yet a first class citizen - it’s not clear the LTE will work outside Android easily and I’ve no hope of ARM opening up the Mali-T880 drivers. I’ve got plans to play around with improving the support, but for the moment I want to actually use the device a bit until I find sufficient time to be able to make progress.

So how did the day go? On the whole, a success. Battery life was great - I’d brought a USB battery pack expecting to need to boost the charge at some point, but I last charged it on Thursday night and at the time of writing it’s still claiming 25% battery left. LTE worked just fine; I had a 4G signal for most of the day with occasional drops down to 3G but no noticeable issues. The keyboard worked just fine; much better than my usual combo of a Nexus 7 + foldable Bluetooth keyboard. Some of the symbols aren’t where you’d expect, but that’s understandable on a scaled down keyboard. Screen resolution is great. I haven’t used the USB-C ports other than to charge and back up so far, but I like the fact there are 2 provided (even if you need a custom cable to get HDMI rather than it following the proper standard). The device feels nice and solid in your hand - the case is mostly metal plates that remove to give access to the SIM slot and (non-removable but user replaceable) battery. The hinge mechanism seems robust; I haven’t been worried about breaking it at any point since I got the device.

What about problems? I can’t deny there are a few. I ended up with a Mediatek X25 instead of an X27 - that matches what was initially promised, but there had been claims of an upgrade. Unfortunately issues at the factory meant that the initial production run got the older CPU. Later backers are supposed to get the upgrade. As someone who took the early risk this does leave a slightly bitter taste but I doubt I’ll actually notice any significant performance difference. The keys on the keyboard are a little lopsided in places. This seems to be just a cosmetic thing and I haven’t noticed any issues in typing. The lack of first class Debian support is disappointing, but I believe will be resolved in time (by the community if not Planet). The camera isn’t as good as my phone, but then it’s a front facing webcam style thing and it’s at least as good as my laptop at that.

Bottom line: Would I buy it again? At $369, absolutely. At the current $599? Probably not - I’m simply not on the move enough to need this on a regular basis, so I’d find it hard to justify. Maybe the 2nd gen, assuming it gets a bit more polish on the execution and proper mainline Linux support. Don’t get me wrong, I think the 1st gen is lovely and I’ve had lots of envious people admiring it, I just think it’s ended up priced a bit high for what it is. For the same money I’d be tempted by the GPD Pocket instead.

Getting Debian booting on a Lenovo Yoga 720

Feb 21, 2018 / 0 comments

I recently got a new work laptop, a 13” Yoga 720. It proved difficult to install Debian on; pressing F12 would get a boot menu allowing me to select a USB stick I have EFI GRUB on, but after GRUB loaded the kernel and the initrd it would just sit there, never outputting anything to indicate the kernel was even starting. I found instructions about Ubuntu 17.10 which helped but weren’t the complete picture. What seems to be the situation is that the kernel won’t happily boot if “Legacy Support” is not enabled - enabling this (and still booting as EFI) results in a happier experience. However in order to be able to enable legacy boot you have to switch the SATA controller from RAID to AHCI, which can cause Windows to get unhappy about its boot device going away unless you warn it first.

  • Fire up an admin shell in Windows (right click on the start menu)
  • bcdedit /set safeboot minimal
  • Reboot into the BIOS
  • Change the SATA Controller mode from RAID to AHCI (there are dire warnings that “All data will be erased”. It’s not true, but you’ve backed up first, right?)
  • Set “Boot Mode” to “Legacy Support”
  • Save changes and let Windows boot to Safe Mode
  • Fire up an admin shell in Windows (right click on the start menu again)
  • bcdedit /deletevalue safeboot
  • Reboot again and Windows will load in normal mode with the AHCI drivers

Additionally I had problems getting the GRUB entry added to the BIOS; efibootmgr shows it fine but it never appears in the BIOS boot list. I ended up using Windows to add it as the primary boot option with the following (<guid> gets replaced with whatever the GUID of the new “Debian” entry is):

bcdedit /enum firmware
bcdedit /copy "{bootmgr}" /d "Debian"
bcdedit /set "{<guid>}" path \EFI\Debian\grubx64.efi
bcdedit /set "{fwbootmgr}" displayorder "{<guid>}" /addfirst

Even with that, at one point the BIOS managed to “forget” about the GRUB entry, requiring me to re-do the final “displayorder” command.
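
For the record, the Linux side of this that ought to be equivalent - the disk, partition number and loader path here match my setup and will need adjusting for anything else - looks something like:

efibootmgr -v
efibootmgr -c -d /dev/nvme0n1 -p 1 -L "Debian" -l '\EFI\Debian\grubx64.efi'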

Once you actually have the thing installed and booting it seems fine - I’m running Buster due to the fact it’s a Skylake machine with lots of bits that seem to want a newer kernel, but claimed battery life is impressive, the screen is very shiny (though sometimes a little too shiny and reflective) and the NVMe SSD seems pretty nippy as you’d expect.
