First steps with the ATtiny45 1 port USB Relay

These days the phrase “embedded” usually means no console (except, if you’re lucky, console on a UART for debugging) and probably busybox for as much of userspace as you can get away with. You possibly have package management from OpenEmbedded or similar, though it might just be a horrible, kludged-together rootfs if someone hates you. Either way it’s rare for it not to involve some sort of hardware and OS much more advanced than the 8-bit machines I started out programming on.

That is, unless you’re playing with Arduinos or other similar hardware. I’m currently waiting on some ESP8266 dev boards to arrive, but even they’re quite advanced, with wifi and a basic OS framework provided. A long time ago I meant to get around to playing with PICs but never managed to do so. What I realised recently was that I have a ready-made USB relay board that is powered by an ATtiny45. The first step was to figure out whether suitable programming pins were available; they turned out to all be conveniently brought out to the edge of the board. Next I got out my trusty Bus Pirate, installed avrdude and lo and behold:

$ avrdude -p attiny45 -c buspirate -P /dev/ttyUSB0
Attempting to initiate BusPirate binary mode...
avrdude: Paged flash write enabled.
avrdude: AVR device initialized and ready to accept instructions

Reading | ################################################## | 100% 0.01s

avrdude: Device signature = 0x1e9206 (probably t45)

avrdude: safemode: Fuses OK (E:FF, H:DD, L:E1)

avrdude done.  Thank you.

Perfect. I then read the existing flash image off the device, disassembled it, worked out it was based on V-USB, and then established that the only interesting extra bit was that the relay was hanging off pin 3 of I/O port B. That led to me knocking up what I thought should be a functionally equivalent version of the firmware, available locally or on GitHub. It’s worked with my basic testing so far and has confirmed I understand how the board is set up, meaning I can start to think about what else I could do with it…
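The actual firmware is V-USB based, but the relay handling boils down to treating PB3 as a plain GPIO output. Here’s a minimal blink-style test sketch rather than the real firmware; the F_CPU value in particular is an assumption for the delay routines, not something read off the board:

/* Minimal test sketch (not the real V-USB firmware): drive the relay on PB3. */
/* F_CPU is an assumption; the board's actual clock/fuse setup may differ.    */
#define F_CPU 16500000UL

#include <avr/io.h>
#include <util/delay.h>

int main(void)
{
    DDRB |= _BV(PB3);            /* PB3 as output: this pin drives the relay */

    for (;;) {
        PORTB |= _BV(PB3);       /* relay on */
        _delay_ms(1000);
        PORTB &= ~_BV(PB3);      /* relay off */
        _delay_ms(1000);
    }
}

Built with avr-gcc and flashed with the same avrdude/Bus Pirate combination as above, that’s enough to confirm which pin the relay hangs off before worrying about the USB side.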

Notes on Kodi + IR remotes

This post is largely to remind myself of the details next time I hit something similar; I found bits of relevant information all over the place, but not in one single location.

I love Kodi. These days the Debian packages give me a nice out-of-the-box experience that is easy to use. The problem comes in dealing with remote controls and making best use of the available buttons. In particular I want to upgrade the VDR setup my parents have to a more modern machine that’s capable of running Kodi. In this instance an AMD E350 nettop, which isn’t recent but does have sufficient hardware acceleration of video decoding to do the job. Plus it has a built-in fintek CIR setup.

The first step was finding a decent remote. The fintek is a proper IR receiver supported by the in-kernel decoding options, so I had a lot of flexibility. As it happened I ended up with a surplus-to-requirements Virgin V Box HD remote (URC174000-04R01). This has the advantage of looking exactly like an STB remote, because it is one.

Pointed it at the box, saw that the fintek_cir module was already installed and fired up irrecord. Failed to get it to actually record properly. Googled lots. Found ir-keytable. Fired up ir-keytable -t and managed to get sensible output with the RC-5 decoder. Used irrecord -l to get a list of valid button names and proceeded to construct a vboxhd file which I dropped in /etc/rc_keymaps/. I then added a

fintek-cir * vboxhd

line to /etc/rc_maps.cfg to force my new keymap to be loaded on boot.
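For reference, the keymap file itself is plain text: a header naming the table and protocol, followed by scancode-to-keycode pairs. The scancodes below are made-up placeholders rather than the real values from this remote, but the shape is along these lines:

# table vboxhd, type: RC-5
# Placeholder scancodes: substitute the values ir-keytable -t reports for your remote.
0x1e01 KEY_1
0x1e02 KEY_2
0x1e0c KEY_POWER
0x1e20 KEY_UP
0x1e21 KEY_DOWN
0x1e25 KEY_OK

ir-keytable -w /etc/rc_keymaps/vboxhd will load it by hand for testing; the rc_maps.cfg entry takes care of doing so automatically.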

That got my remote working, but then came the issue of dealing with the fact that some keys worked fine in Kodi and others didn’t. This seems to be an issue with scancodes above 0xff. I could have remapped the remote not to use any of these, but instead I went down the inputlirc approach (which is already in use on the existing VDR box).

For this I needed a stable device file to point it at; the /dev/input/eventN file wasn’t stable and as a platform device it didn’t end up with a useful entry in /dev/input/by-id. A ‘quick’

udevadm info -a -p $(udevadm info -q path -n /dev/input/eventN)

provided me with the PNP ID (FIT0002), allowing me to create /etc/udev/rules.d/70-remote-control.rules containing

KERNEL=="event*",ATTRS{id}=="FIT0002",SYMLINK="input/remote"
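If you’d rather not reboot to pick the rule up, something like this should do it (restricting the trigger to the input subsystem just avoids re-triggering everything else):

udevadm control --reload-rules
udevadm trigger --subsystem-match=input
ls -l /dev/input/remote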

Bingo, a /dev/input/remote symlink. /etc/default/inputlirc ended up containing:

EVENTS="/dev/input/remote"
OPTIONS="-g -m 0"

The options tell it to grab the device for its own exclusive use, and to take all scancodes rather than letting the keyboard ones through to the normal keyboard layer. I didn’t want anything other than things specifically configured to use the remote to get the key presses.
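A quick way to check inputlirc is doing its thing is to point irw (from the lirc client tools) at its socket and press some buttons; assuming inputlirc is using its default socket location that’s:

irw /var/run/lirc/lircd

Each press should show up as a line containing the key name from the keymap.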

At this point Kodi refused to actually do anything with the key presses. Looking at ~kodi/.kodi/temp/kodi.log I could see them getting seen, but not understood. Further searching led me to construct an Lircmap.xml - in particular the piece I needed was the <remote device="/dev/input/remote"> bit. The existing /usr/share/kodi/system/Lircmap.xml provided a good starting point for what I wanted and I dropped my generated file in ~kodi/.kodi/userdata/.
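As an illustration, a cut-down version of the sort of thing that goes in it is below; the device attribute has to match the path inputlirc was given, the element names are Kodi’s own button names and the values are the key names inputlirc emits (the mappings shown are examples rather than my full file):

<lircmap>
  <remote device="/dev/input/remote">
    <up>KEY_UP</up>
    <down>KEY_DOWN</down>
    <left>KEY_LEFT</left>
    <right>KEY_RIGHT</right>
    <select>KEY_OK</select>
    <back>KEY_BACK</back>
    <power>KEY_POWER</power>
  </remote>
</lircmap>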

(Sadly it turns out I got lucky with the remote; it seems to be using the RC-5x variant which was broken in 3.17; works fine with the 3.16 kernel in Debian 8 (jessie) but nothing later. I’ve narrowed down the offending commit and raised #117221.)


Going to DebConf 16

[Going to DebConf16 banner]

Whoop! Looking forward to it already (though will probably spend it feeling I should be finishing my dissertation).

Outbound:

2016-07-01 15:20 DUB -> 16:45 LHR BA0837
2016-07-01 21:35 LHR -> 10:00 CPT BA0059

Inbound:

2016-07-10 19:20 CPT -> 06:15 LHR BA0058
2016-07-11 09:20 LHR -> 10:45 DUB BA0828

(image stolen from Gunnar)

Software in the Public Interest contributing members: Check your activity status!

That’s a longer title than I’d like, but I want to try and catch the attention of anyone who might have missed more directed notifications about this. If you’re not an SPI contributing member there’s probably nothing to see here…

Although I decided not to stand for re-election at the Software in the Public Interest (SPI) board elections last July, I haven’t stopped my involvement with the organisation. In particular I’ve spent some time working on an overhaul of the members website and rolling it out. One of the things this has enabled is implementation of 2009-11-04.jmd.1: Contributing membership expiry, by tracking activity in elections and providing an easy way for a member to indicate they consider themselves active even if they haven’t voted.

The plan is that this will run at some point after the completion of every board election. A first pass of cleanups was completed nearly a month ago, contacting all contributing members who’d never been seen to vote and asking them to update their status if they were still active. A second round, of people who didn’t vote in the last board election (in 2014), is currently under way. Affected members will have been emailed directly and there was a mail to spi-announce, but I’m aware people often overlook these things or filter mail off somewhere that doesn’t get read often.

If you are an SPI contributing member who considers themselves active, I strongly recommend you log in to the SPI Members Website and check that the “Last active” date displayed is after 2014-07-14 (i.e. after the start of the last board election). If it’s not, click on the “Update” link beside the date. The updated date will be shown once you’ve done so.

Why does pruning inactive members matter? The 2015 X.Org election results provide at least one indication of why an engaged membership is important: a by-laws change that the vast majority of votes were in favour of failed to pass because the election didn’t reach quorum. (If you’re an X.Org member, go vote!)

Dr Stoll: Or how I learned to stop worrying and love the GPL

[I wrote this as part of BelFOSS but I think it’s worth posting here.]

My Free Software journey starts with The Cuckoo’s Egg. Back in the early 90s a family friend suggested I might enjoy reading it. He was right; I was fascinated by the world of interconnected machines it introduced me to. That helped start my involvement in FidoNet, but it also got me interested in Unix. So when I saw a Linux book at the Queen’s University bookshop (sadly no longer with us) with a Slackware CD in the back I had to have it.

The motivation at this point was to have a low-cost version of Unix I could run on the PC hardware I already owned. I had no knowledge of the GNU Project before this point, and as I wasn’t a C programmer I had no interest in looking at the source code. I spent some time futzing around with it, but that partition (I was dual booting with DOS 6.22) eventually fell into disuse. It wasn’t until I’d learnt some C and turned up at university, which provided me with an internet connection and others who were either already using Linux or interested in doing so, that I started running a Linux box full time.

Once I was doing that I became a lot more interested in the Open Source side of the equation. Rather than running a closed operating system whose API wasn’t even properly specified (otherwise I wouldn’t have needed my copy of Undocumented DOS), I had the complete source to both the underlying OS and all the utilities it was using. For someone doing a computer science degree this was invaluable. Minix may have been the OS discussed in the OS Design module I studied, but Linux was a much more feature-complete option that I was running on my desktop and could also peer under the hood of.

In my professional career I’ve always welcomed the opportunities to work with Open Source. A long time ago I experienced a particularly annoying issue when writing a device driver under QNX. The documentation didn’t seem to match the observed behaviour of the subsystem I was interfacing with. However due to licensing issues only a small number of people in the organisation were able to actually look at the QNX source. So I ended up wasting a much more senior engineer’s time with queries like “I think it’s actually doing x, y and z instead of a, b and c; can you confirm?”. Instances where I can look directly at the source code myself make me much more productive.

Commercial development also gave me a better understanding of the Free Software nature of the code I was running. It wasn’t just the ability to look at the code that was useful, but also the fact there was no need to reinvent the wheel. Need a base OS to build an appliance on? Debian ensures that the main component is Free for all usage. No need to worry about rolling your own compilers, base libraries, etc. From a commercial perspective that allows you to concentrate on the actual product. And when you hit problems, the source is available and you can potentially fix it yourself, or at least more easily find out whether a fix for that issue has been released (being able to see code development in version control systems, rather than getting a new upstream release with a whole heap of unrelated fixes in it, really helps with that).

I had thus progressed from using FLOSS because it was free-as-in-beer, to appreciating the benefits of Open Source in my own learning and employment experiences, to a deeper understanding of the free-as-in-speech benefits that could be gained. However at this point I was still thinking very much from a developer mindset. Even my thoughts about how users can benefit from Free Software were in the context of businesses being able to easily switch suppliers or continue to maintain legacy software because they had the source to their systems available.

One of the major factors that has helped me to see beyond this is the expansion of the Internet of Things (IoT). With desktop or server software there is, by and large, a choice about what to use. This is not the case with appliances. While manufacturers will often produce a few revisions of software for their devices, eventually there is a newer, shinier model and the old one is abandoned. This is problematic for many reasons. For example, historically TVs have been long-lived devices (I had one I bought second hand that happily lasted me 7+ years). However the “smart” capabilities of the TV I purchased in 2012 are already of limited usefulness, and LG have moved on to their current models. I have no intention of replacing the device any time soon, so have had to accept it is largely acting as a dumb display. More serious is the lack of security updates. For a TV that doesn’t require a network connection to function this is not as important, but the IoT is a trickier proposition. For example Matthew Garrett had an awful experience with some ‘intelligent’ light bulbs, which effectively circumvented any home network security you might have set up. The manufacturer’s defence? They’re no longer manufactured or supported.

It’s cases like these that have slowly led me to a more complete understanding of the freedom that Free Software truly offers to users. It’s not just about cost free/low cost software. It’s not just about being able to learn from looking at the source to the programs you are running. It’s not even about the freedom to be able to modify the programs that we use. It’s about giving users true Freedom to use and modify their devices as they see fit. From this viewpoint it is much easier to understand the protections against Tivoization that were introduced with GPLv3, and better appreciate the argument sometimes made that the GPL offers more freedom than BSD style licenses.
