Over Christmas I found myself playing some Civilization V - I bought it for Linux some time ago on Steam but hadn’t played it a lot. So I fired it up to play in the background (one of the advantages of turn based games). Turns out it’s quite CPU intensive (the last Civilization I played was the original, which ran under DOS so there was no doing anything else while it ran anyway), and my 6 year old Dell E7240 couldn’t cope very well with switching back and forth. It still played well enough to be enjoyable, just not in a lightweight “while I’m doing other things” manner. On top of that the battery life on the E7240 hadn’t been that great; I’d had to replace the battery in November because the original was starting to bulge significantly and while the 3rd party battery I bought had much better life it was nowhere near the capacity of the old one when it was new.
The problem is I like subnotebooks, and my E7240 was mostly maxed out (i5-4300U, 16G RAM, 500G SATA SSD). A new subnotebook would have a much better CPU, and probably an NVMe SSD instead of SATA, but I’d still have the 16G RAM cap and if I’m looking for a machine to last me another 5 years that doesn’t seem enough. Then in January there were announcements about a Dell XPS 13 that would come with 32G. Perfect, thought I. 10th Gen i7, 32G, 1920x1200 screen in 13”. I have an XPS 13 for work and I’m very happy with it.
Then, while at FOSDEM, I saw an article on Phoronix about a $200 Ryzen 3 laptop (the M141) from Walmart. It looked like it would end up similar in performance to the E7240, but with somewhat better battery life, and for $200 it was well worth a shot (luckily I already had a work trip to the US planned for the middle of February, and the office is near a Walmart). Unfortunately I decided to sleep on it, and when I woke up the price had jumped to $279. Not quite as appealing, but looking around their site I saw a Ryzen 5 variant (the M142) going for $400. It featured a Ryzen 5 3500U with 4 cores (8 threads), a much nicer boost over my i5. Plus AMD instead of Intel removes a whole set of the speculative execution issues that are going around. So I ordered one. And then it got cancelled a couple of days later because they claimed they couldn't take payment. So I tried again, and that seemed to work. Total cost including taxes etc. was about $440 (or £350 in real money).
Base spec as shipped is a Ryzen 5 3500U, 8G RAM and a 256G SATA m.2 SSD. The provided OS is Windows 10 Home. I managed to get it near the start of my US trip, and I'd brought a USB stick with the Debian installer on it, so I decided to reinstall. Sadly the Buster installer didn't work - it booted fine, but the hardware discovery part took ages and generally seemed unhappy. I took the easy option and grabbed the Bullseye Alpha 1 netinst image instead (I run testing on my personal laptop, so this is what I was going to end up with anyway). That worked fine (including, impressively, just working on the hotel wifi, though I think that was partly because the T&Cs acceptance I'd done under Windows was remembered, so I didn't have to do it again to get routed access for the installer). I did need to manually install firmware-amd-graphics to make X happy, but everything else was smooth and I was able to use the laptop in the evenings for the rest of my trip.
The interesting thing to me about this laptop was that the RAM was easily upgradable, and there was some suggestion (though conflicting reports) that it might take 32G. It's only got a single slot (so a single channel, which cripples things a bit, especially with the built-in graphics), but I found a Timetec 32G DDR4 SODIMM and ordered it to try out. It had arrived by the time I got home from the US and I eagerly installed it - only to find the system was unreliable, so I went back to the 8G. Once I had a little more time I tried again, running memtest86 (the UEFI variant) against the RAM and hitting no problems. So I tried limiting memory to 16G (mem=16G on the Linux command line): no issues, even while compiling kernels. 24G, no issues. No limit at all, no issues (so far). So I don't know if I missed something the first time round, such as cooling issues, or if something else entirely was the problem. The BIOS detects the 32G just fine though, so there's obviously support there; it just might be a bit picky about RAM type.
Next I had hoped to transplant the drive from my old laptop; 500G has been plenty for my needs, so I didn't feel the need to upgrade. Except the old machine was old enough that it had an mSATA drive, which wouldn't fit. I've a spare Optane drive so I was hoping I'd be able to use that, but it's a 22110 form factor, and while the M142 has 2 m.2 slots they're both only 2280. The Optane also turned out not to be detected by the BIOS when I slotted it in. So I had to order a new m.2 drive and ended up with a 1T WD Blue, which has worked just fine so far.
How do I find it? It's worked out a bit pricier overall than I hoped (about £550 once I'd bought the RAM and the SSD), but I've ended up with twice the RAM, twice the disk space and twice the cores of my old machine (for probably less than half what I paid for it). Civ V is still slower than I'd like, but it's much happier about multitasking (and the fans don't spin up so much). The machine is a bit bigger, but not terribly so (about 1cm wider), and for a cheap laptop it is light (I had it and my work laptop in my hand baggage on my flight home and it wasn't uncomfortably heavy). The USB-C isn't proper Thunderbolt, so I can't dock with one cable, but the power/HDMI/USB3 ports are beside each other and easy enough to connect. Plus it's HDMI 2.0, which means I can drive my almost-4K monitor at 60Hz without problems (my old laptop was slightly out of spec driving that monitor and would get unhappy now and then). Other than Civ V I'm not really noticing the CPU boost, but day to day I wasn't expecting to. And while it's mostly on my desk and mains powered, powertop is estimating nearly 6 hours of battery life. Not the full working day I can get out of the XPS 13, but pretty respectable for the price point. So, yeah, I'm pretty happy with the purchase. My only complaint would be that the keyboard isn't great, but I'm mostly using an external one and the internal one is fine on the move.
The Dell XPS 13 with 32GB? Still not available on Dell’s site at the time of writing. I won’t be holding my breath.
This week I fixed a bug that dated back to last May. It was in a piece of hardware I assembled, running firmware I wrote most of. And it had been in operation since May without me noticing the issue.
What was the trigger that led to me discovering the bug’s existence? The colder temperatures. See, the device in question is a Digispark/433MHz receiver/USB serial dongle combo that listens for broadcasts from a Digoo DG-R8H wireless temperature/humidity weather station monitor. This is placed outside, giving me external temperature data to feed into my home automation setup.
The thing is, while Belfast is often cold and wet, it's rarely really cold. So up until recently the fact I never saw sub-zero temperatures reported could be attributed to the sensor being on a window sill: the house probably has enough residual heat, and it's sheltered enough, that it never actually got below zero there. And then there were a few days where it obviously did get below zero, that wasn't reflected in the results, and so I scratched my head and dug out the code.
It was obvious when I looked what the issue was; I made no attempt to try and deal with negative temperatures. My excuse for this is that my DS18B20 1-Wire temperature sensor code didn’t make any attempt to deal with negative temperatures either - it didn’t need to, as those are all deployed inside my home and if the temperature gets towards zero the heating is turned on. So first mistake; not thinking about the fact the external sensor was going to have a different set of requirements/limits than the internal one.
Secondly, when I looked at the code closely, it wasn't clear to me how I'd ever been getting the right value. The temperature is a 12 bit value in the middle of a 36 bit data stream, so there's a shift and a mask to extract it for printing. I strip out 4 bits which are always one, which makes things fit nicely into a 32 bit variable. Except I failed to take that stripping into account when calculating the temperature's shift - all the other pieces of information were correctly handled, just not the temperature.
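For anyone wondering what the negative-temperature half of such a fix looks like, here's a minimal sketch in C. The field width (12 bits, two's complement, tenths of a degree) matches the description above, but the shift amount and names are my own assumptions for illustration, not the actual packet layout or firmware code:

```c
#include <stdint.h>

/* Hypothetical sketch: after stripping the 4 always-one bits, the
 * remaining payload fits in a uint32_t. The temperature is a 12-bit
 * two's complement field in tenths of a degree C. A plain
 * shift-and-mask loses the sign, so negative values have to be
 * sign-extended explicitly. TEMP_SHIFT is an assumed position. */
#define TEMP_SHIFT 12

int16_t decode_temp(uint32_t packet)
{
    uint16_t raw = (packet >> TEMP_SHIFT) & 0xFFF; /* 12-bit field */

    if (raw & 0x800)                   /* sign bit of the 12-bit value */
        return (int16_t)(raw | 0xF000); /* sign-extend into 16 bits */
    return (int16_t)raw;
}
```

With this, a raw field of 0xFF6 comes out as -10 (i.e. -1.0°C) instead of the bogus large positive value a bare mask would give.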
Now when I mentioned this on IRC, Lars Wirzenius helpfully piped up with something along the lines of "Of course your test suite caught this for you". I've got a bunch of excuses here: part of it is that once you involve hardware a full end-to-end test becomes a lot harder, part is that this is a personal project and writing the code is more fun than writing an exhaustive test suite, and part is that I obviously just never wrote the negative temperature support, and most probably I'd not have written a test for it either.
Also I did actually perform some testing before putting it into service. Values looked sane when compared to it sitting inside on my desk, and it sitting outside on the window sill. It varied over time as I’d expect, with overnight temperatures being lower than during the day. I even had a weird issue where I was seeing a daily peak at around 7am which I investigated and realised was a result of that being the point where sunlight would bounce off another window and shine directly on the Digoo device.
So what additional testing did I do this time, to make sure I’d fixed the issue properly? I put the Digoo in the freezer…
Anyway, my attempt to take some generally useful lessons from this experience are as follows:
- Not implementing functionality is fine, but at least make a note (file a ticket if it's a formal project) of the fact. I'm a big fan of low priority feature bugs to track these things.
- Developer driven testing will miss things. A second pair of eyes / a suitably devious test mind would have thought of this sort of thing very quickly. Unit tests are great, but I’ve a big belief that QA is a distinctly different skill to development and you need both in your team.
- 20+ years of experience with bit shifting doesn’t mean you won’t mess it up in a non-obvious way.
- Testing that involves hardware can provide trickier problems than pure software testing.
(Oh, and if you really care, I fixed the DS18B20 negative temperature support for completeness.)
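The DS18B20 case is simpler, because the sensor already hands back a 16-bit two's complement value (in units of 1/16°C per its datasheet), so the fix is essentially just treating the raw reading as signed before scaling. A sketch, with names of my own choosing rather than anything from the actual firmware:

```c
#include <stdint.h>

/* The DS18B20 scratchpad holds the temperature as a 16-bit two's
 * complement value in units of 1/16 degree C (LSB first). Casting the
 * combined bytes to int16_t before scaling handles negative
 * temperatures for free. */
float ds18b20_to_celsius(uint8_t lsb, uint8_t msb)
{
    int16_t raw = (int16_t)(((uint16_t)msb << 8) | lsb);
    return raw / 16.0f;
}
```

So the datasheet's example reading of 0x0191 decodes to +25.0625°C, and 0xFE6F to -25.0625°C.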
I wrote last year about my experience with making my first PCB using JLCPCB. I've now got 5 of the boards in production around my house, and another couple assembled on my desk for testing. I also did a much simpler board to mount a GPS module on my MapleBoard - basically just a suitable DIP connector and mount point for the GPS module. At that point I ended up having to pay for shipping; not being in a hurry I went for the cheapest option, which meant the total process took 2 weeks from order until it arrived. Still not bad for under $8!
Just before Christmas I discovered that JLCPCB had expanded their SMT assembly option beyond the Chinese market, and were offering coupons off (but even without that had much, much lower assembly/setup fees than anywhere else I'd seen). Despite JLCPCB being part of LCSC the parts library can be a bit limited (in particular it seems they won't assemble anything complex, such as connectors), with a set of "basic" components carrying no setup fee and then "extended" options which have a $3 setup fee (because they're not permanently loaded on the line, as I understand it).
To test out the service I decided to revise my IoT board. First, I’ve used a few for 12V LED strip control which has meant the 3.3V LDO is working harder than ideal, so I wanted to switch (ha ha) to a buck converter. I worked back from the JLCPCB basic parts list and chose an MP2451, which had a handy data sheet with an example implementation. I also upgraded the ESP module to an ESP32-WROOM - I’ve had some issues with non-flickery PWM on the ESP8266 and the ESP32 has hardware PWM. I also have some applications the Bluetooth would be useful for. Once again I turned to KiCad to draw the schematic and lay out the board. I kept the same form factor for ease, as I knew I could get a case for it. The more complex circuitry was a bit harder to lay out in the same space, and the assembly has a limitation of being single sided which complicates things further, but the fact it was being done for me meant I could drop to 0603 parts.
All-in-all I ended up with 17 parts for the board assembly, with the ESP32 module and power connectors being my responsibility (JLCPCB only have the basic ESP32 chip and I did not feel like trying to design a PCB antenna). I managed to get everything except the inductor from the basic library, which kept costs down. Total cost for 10 boards, parts, assembly, shipping + customs fees was just under $29 which is amazing value to me. What’s even better is that the DFM (design for manufacturing) checks they did realised I’d placed the MP2451 the wrong way round and they automatically rotated it 180° for me. Phew!
The order was placed in the middle of December and arrived just before New Year - again, about 2 weeks total time end to end. Very impressive. Soldering the ESP32 module on was more fiddly than the ESP-07, but it all worked first time with both 5V + 12V power supplies, so I’m very pleased with the results.
Being able to do cheap PCB assembly is a game changer for me. There are various things I feel confident enough to design for my own use that I’d never be able to solder up myself; and for these prices it’s well worth a try. I find myself currently looking at some of the basic STM32 offerings (many of them in JLCPCB’s basic component range) and pondering building a slightly more advanced dev board around one. I’m sure my PCB design will cause those I know in the industry to shudder, but don’t worry, I’ve no plans to do this other than for my own amusement!
As a reader of Planet Debian I see a bunch of updates at the start of each month about what people are up to in terms of their Free Software activities. I’m not generally active enough in the Free Software world to justify a monthly report, and this year in particular I’ve had a bunch of other life stuff going on, but I figured it might be interesting to produce a list of stuff I did over the course of 2019. I’m pleased to note it’s longer than I expected.
I’m not a big conference attendee; I’ve never worked somewhere that paid travel/accommodation for Free Software conferences so I end up covering these costs myself. That generally means I go to local things and DebConf. This year was no exception to that; I attended BelFOSS, an annual free software conference held in Belfast, as well as DebConf19 in Curitiba, Brazil. (FOSDEM was at an inconvenient time this year for me, or I’d have made it to that as well.)
Most of my contributions to Free software happen within Debian.
As part of the Data Protection Team I responded to various minor requests for advice from within the project.
The Debian Keyring was possibly my largest single point of contribution. We’re in a roughly 3 month rotation of who handles the keyring updates, and I handled 2019.03.24, 2019.06.25, 2019.08.23, 2019.09.24 + 2019.12.23.
I managed to get binutils-xtensa-lx106 + gcc-xtensa-lx106 packages (1 + 1) for cross building ESP8266 firmware uploaded in time for the buster release, as well as several updates throughout the year (2, 3 + 2, 3, 4). There was a hitch over some disagreements on the package naming, but it conforms with the generally accepted terms used for this toolchain.
Last year I ended up fixing an RC bug in ghdl, so this year having been the last person to touch the package I did a couple of minor uploads (0.35+git20181129+dfsg-3, 0.35+git20181129+dfsg-4). I’m no longer writing any VHDL as part of my job so my direct interest in this package is limited, but I’ll continue to try and fix the easy things when I have time.
Although I requested the package I originally uploaded it for, l2tpns, to be removed from Debian (#929610) I still vaguely maintain libcli, which saw a couple of upstream driven uploads (1.10.0-1, 1.10.2-1).
OpenOCD is coming up to 3 years since its last stable release, but I did a couple (0.10.0-5, 0.10.0-6) of minor uploads this year. I’ve promised various people I’ll do a snapshot upload and I’ll try to get that into experimental at some point. libjaylink, a dependency, also saw a couple of minor uploads (0.1.0-2, 0.1.0-3).
I pushed an updated version of libtorrent into experimental (0.13.8-1), as a pre-requisite for getting rtorrent updated. Once that had passed through NEW I uploaded 0.13.8-2 and then rtorrent 0.9.8-1.
sdcc was the only package I did sponsored uploads of this year - (3.8.0+dfsg-2, 3.8.0+dfsg-3). I don’t have time to take over maintainership of this package fully, but sigrok-firmware-fx2lafw depends on it to build so I upload for Gudjon and try to help him out a bit.
In terms of personal projects I finally pushed my ESP8266 Clock to the outside world (and wrote it up). I started learning Go, and as part of that wrote gomijia, a tool to passively listen for Bluetooth LE broadcasts from Xiaomi Mijia devices and transmit them over MQTT. I continued to work on onak, my OpenPGP key server, adding support for the experimental v5 key format and dkg's abuse-resistant keystore proposal, and finally merged in support for signature verification. It's due a release, but the documentation really needs improving before I'd be happy to do that.
Back when picolibc was newlib-nano I had a conversation with Keith Packard about getting the ESP8266 newlib port (largely by Max Filippov based on the Tensilica work) included. Much time has passed since then, but I finally got time to port this over and test it this month. I'm hopeful the picolibc-xtensa-lx106-elf package will appear in Debian at some point in the next few months.
As part of my work at Titan IC I did some work on Snort3, largely on improving its support for hardware offload accelerators (ignore the fact my listed commits were all last year, Cisco generally do a bunch of squashed updates to the tree so the original author doesn’t always show).
Software in the Public Interest
While I haven’t sat on the board of SPI since 2015 I’m still the primary maintainer of the membership website (with Martin Michlmayr as the other active contributor). The main work carried out this year was fixing up some issues seen with the upgrade from Stretch to Buster.
I talked about my home automation, including my use of Home Assistant, at NIDC 2019, and again at DebConf with more emphasis on the various aspects of Debian that I’ve used throughout the process. I had a couple of other sessions at DebConf with the Data Protection and Keyring teams. I did a brief introduction to Reproducible Builds for BLUG in October.
I had a one liner accepted to systemd to make my laptop keyboard work out of the box. I fixed up Xilinx XRT to be able to build .debs for Debian (rather than just Ubuntu), have C friendly header files and clean up some GCC 8.3 warnings. I submitted a fix to Home Assistant to accept 202 as a successful REST notification response. And I had a conversation on IRC which resulted in a tmux patch to force detach (literally, I asked how to do this thing, and I think Colin had whipped up a patch before the conversation was even over).
I voted this week. Twice, as it happens (vote early, vote often). This post is about my vote for Debian’s General Resolution: Init systems and systemd. You probably want to skip it, but I thought I’d write it up anyway.
[ 1 ] Choice 5: H: Support portability, without blocking progress
[ 2 ] Choice 4: D: Support non-systemd systems, without blocking progress
[ 3 ] Choice 7: G: Support portability and multiple implementations
[ 4 ] Choice 3: A: Support for multiple init systems is Important
[ 5 ] Choice 2: B: Systemd but we support exploring alternatives
[ 6 ] Choice 1: F: Focus on systemd
[ 7 ] Choice 6: E: Support for multiple init systems is Required
[ 8 ] Choice 8: Further Discussion
Firstly, I’ve re-ordered the ballot in the order I ranked things in. I find the mix of numbers and letters that don’t match up confusing, and I think the ordering on the ballot indicates the bias of whoever did the ordering. I don’t think that’s intended to be anything other than helpful, but I’d have kept the numbers and letters matching in the expected order.
I made use of Ian Jackson’s voting guide (and should disclose that he and I have had conversations about this matter where he kindly took time to explain to me his position and rationale). However I’m more pro-systemd than he is, and also lazier, so hopefully this post is useful in some fashion rather than a simple rehash of anyone else’s logic.
I ranked Further Discussion last. I want this to go away. I feel it’s still sucking too much of the project’s time.
E was easy to rank as second last. While I want to support people who want to run non-systemd setups I don’t want to force us as a project to have to shoehorn that support in where it’s not easily done.
F third last. While I welcome the improvements brought by systemd I’m wary of buying into any ecosystem completely, and it has a lot of tentacles which will make any future move much more difficult if we buy in wholesale (and make life unnecessarily difficult for people who want to avoid systemd, and I’ve no desire to do that).
On the flip side I think those who want to avoid systemd should be able to do so within Debian. I don’t buy the argument that you can just fork and drop systemd there, it’s an invasive change that makes it much, much harder to produce a derivative system. So it’s one of those things we should care about as a project. (If you hate systemd so much you don’t want even its libraries on your system I can’t help you.)
I debated over my ordering for D. I am in favour of portability principles, and I'm happy to make a statement that if someone is prepared to do the work of sorting out non-systemd support for a package then as a project we should take that. I read that as: it's not my responsibility as a maintainer to do these things (though obviously if I can easily do so I will), but I shouldn't get in the way of someone else doing so. As someone who has built things on top of Debian I subscribe to the idea that it should be suitable glue for such things (as well as something I can run directly on my own machines), so I favoured G. I deferred to Ian here; I'd rather systemd wasn't the de facto result despite best intentions, which results in placing G first of the two.