(Badly) cloning a TEMPer USB

Jul 31, 2018

[Image: Digispark/DS18B20]

Having set up a central MQTT broker I’ve wanted to feed it extra data. The study temperature was a start, but not the most useful piece of data when working towards controlling the central heating. As it happens I have a machine in the living room hooked up to the TV, so I thought about buying something like a TEMPer USB so I could sample the room temperature there and add it as a data source. And then I realised that I still had a bunch of Digispark clones and some Maxim DS18B20 1-Wire temperature sensors, and that I should build something instead.

I decided to try and emulate the TEMPer device rather than doing something unique. V-USB was pressed into service, and some furious Googling took place to find out the details of how the TEMPer appears to the host in order to craft the appropriate USB/HID descriptors to present - actually finding some lsusb output was the hardest part. Looking at the code of various tools designed to talk to the device provided details of the different init commands that needed to be recognised, and a basic skeleton framework (reporting a constant 15°C temperature) was crafted. Once that was working with the existing client code, knocking up some 1-Wire code to query the DS18B20 wasn’t too much effort (I seem to keep implementing this code on various devices).

At this point things became less reliable. The V-USB code is an evil (and very clever) set of interrupt driven GPIO bit banging routines, working around the fact that the ATTiny doesn’t have a USB port. 1-Wire is a timed protocol, so the simple implementation involves a bunch of delays. To add to this, the temper-python library does a USB device reset if it sees a timeout, and a double read to work around some behaviour of the real hardware. Doing a 1-Wire transaction directly in response to these requests caused lots of problems, so I implemented a timer to do a 1-Wire temperature check once every 10 seconds, with the request from the host just returning the last value read. This is a lot more reliable, but still sees a few resets a day. It would be nice to fix that, but for the moment it’s good enough for my needs - I’m only reading the temperature once a minute to report back to the MQTT server - though it offends me to see the USB resets in the kernel log.
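
The firmware itself is AVR C, but the shape of that fix is easy to show; an illustrative Python sketch of the approach (the names, values and timings here are mine, not the actual code):

import threading
import time

# Sample the slow 1-Wire sensor in the background and answer host
# requests from a cached value, so they return immediately.
last_temp = 15.0  # the skeleton's constant, until a real reading lands

def read_ds18b20():
    # Stand-in for the real 1-Wire transaction; a 12-bit conversion
    # takes ~750ms, far too long to do inline with USB handling.
    time.sleep(0.75)
    return 21.5

def sampler():
    global last_temp
    while True:
        last_temp = read_ds18b20()
        time.sleep(10)  # one temperature check every 10 seconds

def handle_temperature_request():
    # The host's query never touches the 1-Wire bus.
    return last_temp

threading.Thread(target=sampler, daemon=True).start()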

Additionally I had some problems with accuracy. Firstly, it seems the batch of DS18B20s I have can vary by 1-2°C, so I ended up adjusting for this in the code that runs on the host. Secondly, I mounted the DS18B20 on the Digispark board, as in the picture. The USB cable ensures it’s far enough away from the host (rather than sitting plugged directly into the back of the machine and measuring the PSU fan output temperature), but the LED on the board turned out to be close enough that it affected the reading. I have no need for the LED so I just ended up removing it.
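
Adjusting on the host is then only a couple of lines in the polling loop; a minimal sketch assuming temper-python (the temperusb module) and paho-mqtt, with the topic and calibration offset being illustrative:

import time

import paho.mqtt.client as mqtt
from temperusb import TemperHandler

OFFSET = -1.5  # this particular DS18B20 reads high, so correct for it
TOPIC = "sensors/living-room/temperature"  # illustrative topic

client = mqtt.Client()
client.connect("mqtt-host")
client.loop_start()

# Grab the first TEMPer-compatible device found on the bus.
dev = TemperHandler().get_devices()[0]

while True:
    temp = dev.get_temperature(format="celsius") + OFFSET
    client.publish(TOPIC, "%.1f" % temp)
    time.sleep(60)  # report back once a minute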

The code is available locally and on GitHub in case it’s of use/interest to anyone else.

(I’m currently at DebConf18 but I’ll wait until it’s over before I write it up, and I’ve been meaning to blog about this for a while anyway.)

Fixing a broken ESP8266

Jul 8, 2018

One of the IoT platforms I’ve been playing with is the ESP8266, which is a pretty incredible little chip with dev boards available for under £4. Arduino and MicroPython are both great development platforms for them, but the first board I bought (back in 2016) only had a 4Mbit flash chip. As a result I spent some time writing against the Espressif C SDK and trying to fit everything into less than 256KB so that the flash could hold two images and allow over the air updates. Annoyingly, just as I was getting to the point of success with Richard Burton’s rBoot my device started misbehaving, even when I went back to the default bootloader:

 ets Jan  8 2013,rst cause:1, boot mode:(3,6)

load 0x40100000, len 816, room 16
tail 0
chksum 0x8d
load 0x3ffe8000, len 788, room 8
tail 12
chksum 0xcf
ho 0 tail 12 room 4
load 0x3ffe8314, len 288, room 12
tail 4
chksum 0xcf
csum 0xcf

2nd boot version : 1.2
  SPI Speed      : 40MHz
  SPI Mode       : DIO
  SPI Flash Size : 4Mbit
jump to run user1

Fatal exception (0):
epc1=0x402015a4, epc2=0x00000000, epc3=0x00000000, excvaddr=0x00000000, depc=0x00000000
Fatal exception (0):
epc1=0x402015a4, epc2=0x00000000, epc3=0x00000000, excvaddr=0x00000000, depc=0x00000000
Fatal exception (0):

(repeats indefinitely)

Various things suggested this was a bad flash. I tried a clean MicroPython install, a restore of the original AT firmware backup I’d taken, and lots of different combinations of my own code/the blinkenlights demo and rBoot/Espressif’s bootloader. I made sure my 3.3v supply had enough oomph (I’d previously been cheating and using the built in FT232RL regulator, which doesn’t supply quite enough current when the device is fully operational, such as when doing an OTA flash, rather than just sitting in UART boot mode). No joy. I gave up and moved on to one of the other ESP8266 modules I had, with a greater amount of flash. However I was curious about whether this was simply a case of the flash chip wearing out (various sites claim the cheap ones on some dev boards will die after a fairly small number of programming cycles). So I ordered some 16Mbit devices - cheap enough to make it worth trying out, but also giving a useful bump in space.

They arrived this week and I set about removing the old chip and soldering on the new one (Andreas Spiess has a useful video of this, or there’s Pete Scargill’s write up). I powered it all up, ran esptool.py flash_id to confirm it was correctly detected as a 16Mbit/2MB device, and set about flashing my app onto it, only to get:

 ets Jan  8 2013,rst cause:2, boot mode:(3,3)

load 0x40100000, len 612, room 16
tail 4
chksum 0xfd
load 0x88380000, len 565951362, room 4
flash read err, ets_unpack_flash_code
ets_main.c

Oops. I had better luck with a complete flash erase (esptool.py erase_flash) and then a full program of MicroPython using esptool.py --baud 460800 write_flash --flash_size=detect -fm dio 0 esp8266-20180511-v1.9.4.bin, which at least convinced me I’d managed to solder the new chip on correctly. Further experimentation revealed I needed to pass all of the flash parameters to esptool.py to get rBoot entirely happy, and to include esp_init_data_default.bin - the RF init data lives in the last few sectors of flash, which is why it ends up at 0x1fc000 on a 2MB part (FWIW I updated everything to v2.2.1 as part of the process):

esptool.py write_flash --flash_size=16m -fm dio 0x0 rboot.bin 0x2000 rom0.bin \
    0x120000 rom1.bin 0x1fc000 esp_init_data_default_v08.bin

Which gives (at the 74880 baud the boot ROM defaults to for the bootloader part):

 ets Jan  8 2013,rst cause:1, boot mode:(3,7)

load 0x40100000, len 1328, room 16
tail 0
chksum 0x12
load 0x3ffe8000, len 604, room 8
tail 4
chksum 0x34
csum 0x34

rBoot v1.4.2 - richardaburton@gmail.com
Flash Size:   16 Mbit
Flash Mode:   DIO
Flash Speed:  40 MHz

Booting rom 0.
rf cal sector: 507
freq trace enable 0
rf[112]

Given the cost of the modules it wasn’t really worth my time and energy to actually fix the broken one rather than buying a new one, but it was rewarding to be sure of the root cause. Hopefully this post at least serves to help anyone seeing the same exception messages determine that there’s a good chance their flash has died, and that a replacement may sort the problem.

Thoughts on the acquisition of GitHub by Microsoft

Jun 28, 2018

Back at the start of 2010, I attended linux.conf.au in Wellington. One of the events I attended was sponsored by GitHub, who bought me beer in a fine Wellington bar (that was very proud of having an almost complete collection of BrewDog beers, including some Tactical Nuclear Penguin). I proceeded to tell them that I really didn’t understand their business model and that one of the great things about git was the very fact it was decentralised and we didn’t need to host things in one place any more. I don’t think they were offended, and the announcement that Microsoft are acquiring GitHub for $7.5 billion proves that they had a much better idea about this stuff than me.

The acquisition announcement seems to have caused an exodus. GitLab reported over 13,000 projects being migrated in a single hour. IRC and Twitter were full of people throwing up their hands and saying it was terrible. Why is this? The fear factor seemed to come from who was doing the acquiring: Microsoft. The big, bad “Linux is a cancer” folk. I saw a similar, though more muted, reaction when LinkedIn were acquired.

This extremely negative reaction to Microsoft seems bizarre to me these days. I’m well aware of their past, and their anti-competitive practices (dating back to MS-DOS vs DR-DOS). I’ve no doubt their current embrace of Free Software is ultimately driven by business decisions rather than a sudden fit of altruism. But I do think their current behaviour is something we could never have foreseen 15+ years ago. Did you ever think Microsoft would be a contributor to the Linux kernel? Is it fair to maintain such animosity? Not for me to say, I guess, but I think that some of it is that both GitHub and LinkedIn were services that people were already uneasy about using, and the acquisition was the straw that broke the camel’s back.

What are the issues with GitHub? I previously wrote about the GitHub TOS changes, stating I didn’t think it was necessary to fear the TOS changes, but that the centralised nature of the service was potentially something to be wary of. joeyh talked about this as long ago as 2011, discussing the aspects of the service other than the source code hosting that were only API accessible, or in some other way more restricted than a git clone away. It’s fair criticism; the extra features offered by GitHub are very much tied to their service. And yet I don’t recall the same complaints about SourceForge, long the home of choice for Free Software projects. Its problems seem to be more around a dated interface, being slow to support distributed VCSes, and the addition of advertising. People left because there were much better options, not because of ideological differences.

Let’s look at the advantages GitHub had (and still has) to offer. I held off on setting up a GitHub account for a long time. I didn’t see the need; I self-hosted my Git repositories. I had the ability to setup mailing lists if I needed them (and my projects generally aren’t popular enough that they did). But I succumbed in 2015. Why? I think it was probably as part of helping to run an OpenHatch workshop, trying to get people involved in Free software. That may sound ironic, but helping out with those workshops helped show me the benefit of the workflow GitHub offers. The whole fork / branch / work / submit a pull request approach really helps lower the barrier to entry for people getting started out. Suddenly fixing an annoying spelling mistake isn’t a huge thing; it’s easy to work in your own private playground and then make that work available to upstream and to anyone else who might be interested.

For small projects without active mailing lists that’s huge. Even for big projects that can be a huge win. And it’s not just useful to new contributors. It lowers the barrier for me to be a patch ‘n run contributor. Now that’s not necessarily appealing to some projects, because they’d rather get community involvement. And I get that, but I just don’t have the time to be active in all the projects I feel I can offer something to. Part of that ease is the power of git, the fact that a clone is a first class repo, capable of standing alone or being merged back into the parent. But another part is the interface GitHub created, and they should get some credit for that. It’s one of those things that once you’re presented with it it makes sense, but no one had done it quite as slickly up to that point. Submissions via mailing lists are much more likely to get lost in the archives compared to being able to see a list of all outstanding pull requests on GitHub, and the associated discussion. And subscribe only to that discussion rather than everything.

GitHub also seemed to appear at the right time. It, like SourceForge, enabled easy discovery of projects. Crucially it did this at a point when web frameworks were taking off and a whole range of developers who had not previously pulled large chunks of code from other projects were suddenly doing so. And writing frameworks or plugins themselves and feeling in the mood to share them. GitHub has somehow managed to hit critical mass such that lots of code that I’m sure would otherwise never have seen the light of day is available to all. Perhaps the key was that repos were lightweight setups under usernames, unlike the heavier SourceForge approach of needing a complete project setup per codebase you wanted to push. Although it’s not my primary platform, I engage with GitHub for my own code because the barrier is low; it’s a couple of clicks on the website and then I just push to it like my other remote repos.

I seem to be coming across as a bit of a GitHub apologist here, which isn’t my intention. I just think the knee-jerk anti-GitHub reaction has been fascinating to observe. I signed up to GitLab around the same time as GitHub, but I’m not under any illusions that their hosted service is significantly different from GitHub in terms of having my data hosted by a third party. Nothing that’s up on either site is only up there, and everything that is up there is publicly available anyway. I understand that as third parties they can change my access at any point in time, and so I haven’t built any infrastructure that assumes their continued existence. That said, why would I not take advantage of their facilities when they happen to be of use to me?

I don’t expect my use of GitHub to significantly change now they’ve been acquired.

Hooking up Home Assistant to Alexa + Google Assistant

Jun 12, 2018

I have an Echo Dot. Actually I have two; one in my study and one in the dining room. Mostly we yell at Alexa to play us music; occasionally I ask her to set a timer, tell me what time it is or tell me the news. Having set up Home Assistant it seemed reasonable to try and enable control of the light in the dining room via Alexa.

Perversely I started with Google Assistant, even though I only have access to it via my phone. Why? Because the setup process was a lot easier. There are a bunch of hoops to jump through that are documented on the Google Assistant component page, but essentially you create a new home automation component in the Actions on Google interface, connect it with the Google OAuth stuff for account linking, and open up your Home Assistant instance to the big bad internet so Google can connect.

This final step is where I differed from the provided setup. My instance is accessible internally at home, but I haven’t wanted to expose it externally yet (and I suspect I never will, but will instead keep the ability to VPN back in to access it or similar). The default instructions have you open up API access publicly and configure Google with your API password, which allows access to everything. I’d rather not.

So, firstly I configured up my external host with an Apache instance and a Let’s Encrypt cert (luckily I have a static IP, so this was actually the base host that the Home Assistant container runs on). Rather than using this to proxy the entire Home Assistant setup I created a unique /external/google/randomstring proxy just for the Google Assistant API endpoint. It looks a bit like this:

<VirtualHost *:443>
  ServerName my.external.host

  ProxyPreserveHost On
  ProxyRequests off

  RewriteEngine on

  # External access for Google Assistant
  ProxyPassReverse /external/google/randomstring http://hass-host:8123/api/google_assistant
  RewriteRule ^/external/google/randomstring$ http://hass-host:8123/api/google_assistant?api_password=myapipassword [P]
  RewriteRule ^/external/google/randomstring/auth$ http://hass-host:8123/api/google_assistant/auth?%{QUERY_STRING}&api_password=myapipassword [P]

  SSLEngine on
  SSLCertificateFile /etc/ssl/my.external.host.crt
  SSLCertificateKeyFile /etc/ssl/private/my.external.host.key
  SSLCertificateChainFile /etc/ssl/lets-encrypt-x3-cross-signed.crt
</VirtualHost>

This locks down the external access to just the Google Assistant endpoint, and means that Google have a specific shared secret rather than the full API password. I needed to configure up Home Assistant as well, so configuration.yaml gained:

google_assistant:
  project_id: homeautomation-8fdab
  client_id: oFqHKdawWAOkeiy13rtr5BBstIzN1B7DLhCPok1a6Jtp7rOI2KQwRLZUxSg00rIEib2NG8rWZpH1cW6N
  access_token: l2FrtQyyiJGo8uxPio0hE5KE9ZElAw7JGcWRiWUZYwBhLUpH3VH8cJBk4Ct3OzLwN1Fnw39SR9YArfKq
  agent_user_id: noodles@earth.li
  api_key: nyAxuFoLcqNIFNXexwe7nfjTu2jmeBbAP8mWvNea
  exposed_domains:
    - light

Setting up Alexa access is more complicated. Amazon Smart Home skills must call an AWS Lambda - the code that services the request is essentially a small service run within Lambda. Home Assistant supports all the appropriate requests, so the Lambda code is a very simple proxy these days. I used Haaska, which has a complete setup guide. You must do all three steps - the OAuth provider, the AWS Lambda and the Alexa Skill. Again, I wanted to avoid exposing the full API or the API password, so I forked Haaska to remove the use of a password and instead use a custom URL. I then added the following additional lines to the Apache config above:

# External access for Amazon Alexa
ProxyPassReverse /external/amazon/stringrandom http://hass-host:8123/api/alexa/smart_home
RewriteRule /external/amazon/stringrandom http://hass-host:8123/api/alexa/smart_home?api_password=myapipassword [P]
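
The Lambda side then boils down to very little; a minimal sketch of the sort of forwarding it does (simplified, not Haaska’s actual code - the URL is the custom one configured above):

import json
import os
import urllib.request

# Simplified sketch of the Lambda proxy: forward the Alexa Smart Home
# directive to Home Assistant verbatim and hand back the response.
HASS_URL = os.environ.get(
    "HASS_URL", "https://my.external.host/external/amazon/stringrandom")

def lambda_handler(event, context):
    req = urllib.request.Request(
        HASS_URL,
        data=json.dumps(event).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.loads(resp.read().decode("utf-8"))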

In the config.json I left the password field blank and set url to https://my.external.host/external/amazon/stringrandom. configuration.yaml required less configuration than the Google equivalent:

alexa:
  smart_home:
    filter:
      include_entities:
        - light.dining_room_lights
        - light.living_room_lights
        - light.kitchen
        - light.snug

(I’ve added a few more lights, but more on the exact hardware details of those at another point.)

To enable in Alexa I went to the app on my phone, selected the “Smart Home” menu option, enabled my Home Assistant skill and was able to search for the available devices. I can then yell “Alexa, turn on the snug” and magically the light turns on.

Aside from being more useful (due to the use of the Dot rather than pulling out a phone) the Alexa interface is a bit smoother - the command detection is more reliable (possibly due to the more limited range of options it has to work out?) and adding new devices is a simple rescan. Adding new devices with Google Assistant seems to require unlinking and relinking the whole setup.

The only problem with this setup so far is that it’s only really useful for the room with the Alexa in it. Shouting from the living room in the hope the Dot will hear is a bit hit and miss, and I haven’t yet figured out a good alternative method for controlling the lights there that doesn’t mean using a phone or a tablet device.

Getting started with Home Assistant

Jun 5, 2018

Having set up some MQTT sensors and controllable lights the next step was to start tying things together with a nicer interface than mosquitto_pub and mosquitto_sub. I don’t yet have enough devices setup to be able to do some useful scripting (turning on the snug light when the study is cold is not helpful), but a web control interface makes things easier to work with as well as providing a suitable platform for expansion as I add devices.

There are various home automation projects out there to help with this. I’d previously poked openHAB and found it quite complex, and I saw reference to Domoticz which looked viable, but in the end I settled on Home Assistant, which is written in Python and has a good range of integrations available out of the box.

I shoved the install into a systemd-nspawn container (I have an Ansible setup which makes spinning one of these up with a basic Debian install simple, and it makes it easy to cleanly tear things down as well). One downside of Home Assistant is that it decides it’s going to install various Python modules once you actually configure up some of its integrations. This makes me a little uncomfortable, but I set it up with its own virtualenv to make it easy to see what had been pulled in. Additionally I separated out the logs, config and state database, all of which normally go in ~/.homeassistant/. My systemd service file went in /etc/systemd/system/home-assistant.service and looks like:

[Unit]
Description=Home Assistant
After=network-online.target

[Service]
Type=simple
User=hass
ExecStart=/srv/hass/bin/hass -c /etc/homeassistant --log-file /var/log/homeassistant/homeassistant.log

# Assorted sandboxing to limit what the service can touch
MemoryDenyWriteExecute=true
ProtectControlGroups=true
PrivateDevices=true
ProtectKernelTunables=true
ProtectSystem=true
RestrictRealtime=true
RestrictNamespaces=true

[Install]
WantedBy=multi-user.target

Moving the state database needs an edit to /etc/homeassistant/configuration.yaml (a default will be created on first startup; I’ll only mention the changes I made here):

recorder:
  db_url: sqlite:///var/lib/homeassistant/home-assistant_v2.db

I disabled the Home Assistant cloud piece, as I’m not planning on using it:

# cloud:

And the introduction card:

# introduction:

The existing MQTT broker was easily plumbed in:

mqtt:
  broker: mqtt-host
  username: hass
  password: !secret mqtt_password
  port: 8883
  certificate: /etc/ssl/certs/ca-certificates.crt

Then the study temperature sensor (part of the existing sensor block that had weather prediction):

sensor:
  - platform: mqtt
    name: "Study Temperature"
    state_topic: "collectd/mqtt.o362.us/mqtt/temperature-study"
    value_template: "{{ value.split(':')[1] }}"
    device_class: "temperature"
    unit_of_measurement: "°C"

The templating ability let me continue to log into MQTT in a format collectd could parse, while also being able to pull the information into Home Assistant.
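
collectd’s MQTT plugin publishes payloads of the form timestamp:value, so the template just keeps everything after the colon. In plain Python the equivalent (with an illustrative payload) is:

# What the value_template above does, expressed in plain Python.
payload = "1530000000.123:21.4"  # collectd's "<timestamp>:<value>" format

value = payload.split(":")[1]
print(value)  # -> "21.4", which Home Assistant records as °C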

Finally the Sonoff controlled light:

light:
  - platform: mqtt
    name: snug
    command_topic: 'cmnd/sonoff-snug/power'
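
Before wiring it into Home Assistant it’s worth checking the Sonoff responds to that topic directly. A quick sketch with paho-mqtt, assuming firmware that toggles on ON/OFF payloads and using the broker details from the mqtt: block above (password illustrative):

import paho.mqtt.client as mqtt

# Publish directly to the Sonoff's command topic as a sanity check.
client = mqtt.Client()
client.username_pw_set("hass", "the-mqtt-password")
client.tls_set("/etc/ssl/certs/ca-certificates.crt")
client.connect("mqtt-host", 8883)
client.loop_start()

info = client.publish("cmnd/sonoff-snug/power", "ON")
info.wait_for_publish()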

I set http_password (to prevent unauthenticated access) and mqtt_password in /etc/homeassistant/secrets.yaml. Then systemctl start home-assistant brought the system up on http://hass-host:8123/, and the default interface presented the study temperature and a control for the snug light, as well as the default indicators of whether the sun is up or not and the local weather status.

I do have a few niggles with Home Assistant:

  • Single password for access: There’s one password for accessing the API endpoint, so no ability to give different users different access or limit what an external integration can do.
  • Wants an entire subdomain: This is a common issue with webapps; they don’t want to live in a subdirectory under a main site (I also have this issue with my UniFi controller and Keybase, who don’t want to believe my main website is old skool with /~noodles/). There’s an open configurable webroot feature request, but no sign of it getting resolved. Sadly it involves work to both the backend and the frontend - I think a modicum of hacking could fix up the backend bits, but have no idea where to start with a Polymer frontend.
  • Installs its own software: I don’t like the fact the installation of Python modules isn’t an up front thing. I’d rather be able to pull a dependency file easily into Ansible and lock down the installation of new things. I can probably get around this by enabling plugins, allowing the modules to be installed and then locking down permissions, but it’s kludgy and feels fragile.
  • Textual configuration: I’m not really sure I have a good solution to this, but it’s clunky to have to do all the configuration via a text file (even though I love scriptable configuration). This isn’t something that’s going to work out of the box for non-technical users, and even for those of us happy hand editing YAML there’s a lot of functionality that’s hard to discover without some digging. One of my original hopes with Home Automation was to get better central heating control, and if it’s not usable by any household member it isn’t going to count as better.

Some of these are works in progress, some are down to my personal preferences. There’s active development, which is great to see, and plenty of documentation - both official on the project website, and in the community forums. And one of the nice things about tying everything together with MQTT is that if I do decide Home Assistant isn’t the right thing down the line, I should be able to drop in anything else that can deal with an MQTT broker.
