Let's talk about work/life balance in lockdown

A SYNCNI article passed by on my Twitter feed this morning, talking about maintaining a work/life balance while working from home in these times of COVID-19 inspired lockdown. The associated Twitter thread expressed an interest in some words of advice from men to other men (because of course the original article has the woman having to do all the balancing).

This post does not contain the words of advice sought, but hopefully it at least offers some reassurance that if you’re finding all of this difficult you’re not alone. From talking to others I don’t think there’s anything particularly special we’re doing in this house; a colleague is taking roughly the same approach, and some of the other folk I’ve spoken to in the local tech scene seem to be doing likewise.

First, the situation. Like many households both my wife and I work full time. We have a small child (not even a year and a half old yet). I work for a software startup, my wife is an HR business partner for a large multinational, dealing with employees all over the UK and Ireland. We’re both luckily able to work from home easily - our day to day work machines are laptops, and our employers were already set up with the appropriate VPN / video conferencing etc. facilities. Neither of us has seen any decrease in workload since lockdown; there are always more features and/or bugs to work on when it comes to a software product, and, as I’m sure you can imagine, there has been a lot more activity in the HR sphere over the past 6 weeks as companies try to work out what to do.

On top of this our childcare arrangements, like everyone else’s, are completely gone. Nursery understandably shut down around the same time as the schools (slightly later, but not by much) and contact with grandparents is obviously out (which they’re finding hard). So we’re left trying to fit two full-time jobs in with full-time childcare of a child who until recently tried to go downstairs by continuing to crawl forward.

Make no mistake, this is hard. I know we are exceptionally lucky in our situation, but that doesn’t mean we’re finding it easy. We’ve adopted an approach of splitting the day up. I take the morning slot (previously I would have got up with our son anyway), getting him up and fed while my wife showers. She takes over for a bit while I shower and dress, then I take over again in time for her to take her daily 8am conference call.

My morning is mostly taken up with childcare until nap time; I try to check in first thing to make sure there’s nothing urgent, and get a handle on what I might have to work on later in the day. My local team mates know they’re more likely to get me late morning and it’s better to arrange meetings in the afternoon. Equally I work with a lot of folk on the US West coast, so shifting my hours to be a bit later is not a problem there.

After nap time (which, if we’re lucky, takes us to lunch) my wife takes over. As she deals with UK/Ireland folk she often ends up having to take calls even while looking after our son; generally important meetings can be moved to the morning and meetings with folk who understand there might be a lot of pot banging going on in the background can happen in the afternoon.

Having started late I generally work late - past the point where I’d normally get home; if I’m lucky I pop my head in for bath time, but sometimes it’s only for a couple of minutes. We alternate cooking, usually based on workload and meetings. For example tonight I’m cooking while my wife catches up on some work after having put our son to bed. Last night I had a few meetings so my wife cooked.

So what’s worked for us? Splitting the day means we can plan with our co-workers. We always make sure we eat together in the evening, and that generally is the cut-off point for either of us doing any work. I’m less likely to be online in the evening because my study has become the place I work. That means I’m not really doing any of my personal projects - this definitely isn’t a case of being at home during lockdown and having lots of time to achieve new things. It’s much more a case of trying to find a sustainable way to get through the current situation, however long it might last.

Building a very minimal initramfs

I’m working on trying to get a Qualcomm IPQ8064 device working properly. I’d got as far as an extremely hacked up chain of network booting and the serial console working, so the next stage was to try and get a basic userland in place to make poking things a bit easier. While I have a cross-compiling environment all set up (that’s how I’m building my kernels) I didn’t really want to get into something complicated just to be able to poke around sysfs etc. I figured I could use a static BusyBox in an initramfs. I was right, but there were a couple of hiccups along the way so I’m writing it up so I remember next time.

First, I cheated and downloaded a prebuilt static binary from https://busybox.net/downloads/binaries/ (very convenient, thanks) - the IPQ8064 is a dual core ARMv7 so I used busybox-armv7l. There were various sites with init scripts to go along with this; I ended up with a variation of Star Brilliant’s gist:

Minimal init script
#!/bin/busybox sh

busybox mkdir -p /dev /etc /proc /root /sbin /sys /usr/bin /usr/sbin
busybox mount -t proc proc /proc
busybox mount -t sysfs sys /sys
busybox mount -t devtmpfs dev /dev
echo Starting up...
echo ttyMSM0::respawn:/sbin/getty -L ttyMSM0 115200 vt100 >> /etc/inittab
echo Login with root and no password. > /etc/issue
echo >> /etc/issue
echo root::0:0:root:/root:/bin/sh > /etc/passwd
busybox mdev -s
busybox --install
hostname localhost
ip link set lo up
echo 5 > /proc/sys/kernel/printk
exec /linuxrc

BusyBox and the init script alone aren’t sufficient. If you hit any problems you’re going to want a console device; otherwise you’ll do what I did - forget to change one of the tty references in the script to the right device, and spend too long trying to figure out why you’re not getting any output as soon as userspace starts running. It turns out that CONFIG_DEVTMPFS_MOUNT isn’t sufficient for the initramfs (the Kconfig help text even tells you that), so I needed to create a /dev/console device node inside the initramfs too. The catch is that I don’t build as root (and you shouldn’t either), which makes creating a device node hard. It’s OK though, because the Linux kernel doesn’t need you to be root to build it, and its own default initramfs contains a console device. That’s generated using usr/gen_init_cpio, which takes a file describing what you want to put in the cpio archive that becomes your initramfs. I used the following:

Minimal initramfs creation file
# Simple busybox based initramfs

dir /dev 0755 0 0
nod /dev/console 0600 0 0 c 5 1
dir /root 0700 0 0
dir /bin 0755 0 0
dir /sbin 0755 0 0
file /init basic-init 0755 0 0
file /bin/busybox busybox-armv7l 0755 0 0

All that then needs to be done is to run linux/usr/gen_init_cpio initramfs.list > initramfs.cpio, and the resulting initramfs.cpio is suitable for inclusion in your kernel. The precompiled binaries have a lot of stuff enabled, which makes for a pretty useful debugging environment with fairly minimal work.
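For completeness, a sketch of the two ways to hook this up - assuming the kernel tree is checked out in ./linux and the file names used above; note that CONFIG_INITRAMFS_SOURCE accepts either a prebuilt cpio archive or the gen_init_cpio description file itself:

```sh
# Option 1: build the cpio by hand and embed the result.
./linux/usr/gen_init_cpio initramfs.list > initramfs.cpio
#   CONFIG_INITRAMFS_SOURCE="/path/to/initramfs.cpio"

# Option 2: hand the kernel build the list file directly and let it
# run gen_init_cpio for you at build time.
#   CONFIG_INITRAMFS_SOURCE="/path/to/initramfs.list"
```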

New laptop: Walmart Motile M142

Over Christmas I found myself playing some Civilization V - I bought it for Linux some time ago on Steam but hadn’t played it a lot. So I fired it up to play in the background (one of the advantages of turn based games). Turns out it’s quite CPU intensive (the last Civilization I played was the original, which ran under DOS so there was no doing anything else while it ran anyway), and my six-year-old Dell E7240 couldn’t cope very well with switching back and forth. It still played well enough to be enjoyable, just not in a lightweight “while I’m doing other things” manner. On top of that the battery life on the E7240 hadn’t been that great; I’d had to replace the battery in November because the original was starting to bulge significantly, and while the 3rd party battery I bought had much better life it was nowhere near the capacity of the old one when it was new.

The problem is I like subnotebooks, and my E7240 was mostly maxed out (i5-4300U, 16G RAM, 500G SATA SSD). A new subnotebook would have a much better CPU, and probably an NVMe SSD instead of SATA, but I’d still have the 16G RAM cap and if I’m looking for a machine to last me another 5 years that doesn’t seem enough. Then in January there were announcements about a Dell XPS 13 that would come with 32G. Perfect, thought I. 10th Gen i7, 32G, 1920x1200 screen in 13”. I have an XPS 13 for work and I’m very happy with it.

Then, while at FOSDEM, I saw an article in Phoronix about a $200 Ryzen 3 (the M141) from Walmart. It looked like it would end up similar in performance to the E7240, but with a bit better battery life, and for $200 it was well worth a shot (luckily I already had a work trip to the US planned for the middle of February, and the office is near a Walmart). Unfortunately I decided to sleep on it and when I woke up the price had jumped to $279. Not quite as appealing, but looking around their site I saw a Ryzen 5 variant (the M142) going for $400. It featured a Ryzen 5 3500U, which means 4 cores (8 threads) - a much nicer boost over my i5. Plus going AMD instead of Intel removes a whole set of the speculative execution issues that are going around. So I ordered one. And then it got cancelled a couple of days later because they claimed they couldn’t take payment. So I tried again, and that seemed to work. Total cost including taxes etc. was about $440 (or £350 in real money).

Base spec as shipped is Ryzen 5 3500U, 8G RAM + 256G SATA m.2 SSD. Provided OS is Windows 10 Home. I managed to get it near the start of my US trip, and I’d brought a USB stick with the Debian installer on it, so I decided to reinstall. Sadly the Buster installer didn’t work - it booted fine but the hardware discovery part took ages and generally seemed unhappy. I took the easy option and grabbed the Bullseye Alpha 1 netinst image instead (I run testing on my personal laptop, so this is what I was going to end up with anyway). That worked fine (including, impressively, just working on the hotel wifi - though I think that was partly because my T&Cs acceptance under Windows had been remembered, so I didn’t have to do it again to get routed access for the installer). I did need to manually install firmware-amd-graphics to make X happy, but everything else was smooth and I was able to use the laptop in the evenings for the rest of my trip.

The interesting thing to me about this laptop was that the RAM was easily upgradable, and there was some suggestion (though conflicting reports) that it might take 32G. It’s only got a single slot (so a single channel, which cripples things a bit, especially with the built in graphics), but I found a Timetec 32G DDR4 SODIMM and ordered it to try out. It had arrived by the time I got home from the US and I eagerly installed it. Only to find the system was unreliable, so I went back to the 8G. Once I had a little more time I played again, running memtest86 (the UEFI variant) to test the RAM and hitting no problems. So I tried limiting memory to 16G (mem=16G on the Linux command line): no issues, even while compiling kernels. 24G, no issues. Didn’t limit it at all, no issues (so far). So I don’t know if I missed something the first time round, such as cooling issues, or if something else entirely was the problem then. The BIOS detects the 32G just fine though, so there’s obviously support there; it just might be a bit picky about RAM type.

The next thing was that I had hoped to transplant the drive from my old laptop across; 500G has been plenty for my laptop, so I didn’t feel the need to upgrade. Except the old machine was old enough that it had an mSATA drive, which wouldn’t fit. I’ve a spare Optane drive so I was hoping I’d be able to use that, but it’s a 22110 form factor and while the M142 has 2 m.2 slots they’re both only 2280. The Optane also turned out not to be detected by the BIOS when I fitted it. So I had to order a new m.2 drive and ended up with a 1T WD Blue, which has worked just fine so far.

How do I find it? It’s worked out a bit pricier overall than I hoped (about £550 once I bought the RAM and the SSD) but I’ve ended up with twice the RAM, twice the disk space and twice the cores of my old machine (for probably less than half what I paid for it). Civ V is still slower than I’d like, but it’s much happier about multitasking (and the fans don’t spin up so much). The machine is a bit bigger, but not terribly so (about 1cm wider), and for a cheap laptop it is light (I had it and my work laptop on my flight home in my hand baggage and it wasn’t uncomfortably heavy). The USB-C isn’t proper Thunderbolt, so I can’t dock with one cable, but the power/HDMI/USB3 are beside each other and easy enough to connect. Plus it’s HDMI 2.0, which means I can drive my almost-4K monitor at 60Hz without problems (my old laptop was slightly out of spec driving the monitor and would get unhappy now and then). Other than Civ V I’m not really noticing the CPU boost, but day to day I wasn’t expecting to. And while it’s mostly on my desk and mains powered, powertop is estimating nearly 6 hours of battery life. Not the full working day I can get out of the XPS 13, but pretty respectable for the price point. So, yeah, I’m pretty happy with the purchase. My only complaint would be that the keyboard isn’t great, but I’m mostly using an external one and the internal one is fine on the move.

The Dell XPS 13 with 32GB? Still not available on Dell’s site at the time of writing. I won’t be holding my breath.

Hardware, testing and time

This week I fixed a bug that dated back to last May. It was in a piece of hardware I assembled, running firmware I wrote most of. And it had been in operation since May without me noticing the issue.

What was the trigger that led to me discovering the bug’s existence? The colder temperatures. See, the device in question is a Digispark/433MHz receiver/USB serial dongle combo that listens for broadcasts from a Digoo DG-R8H wireless temperature/humidity weather station monitor. This is placed outside, giving me external temperature data to feed into my home automation setup.

The thing is, while Belfast is often cold and wet, it’s rarely really cold. So up until recently the fact I never saw sub-zero temperatures reported could just be attributed to the sensor being on a window sill; the house probably has enough residual heat, and it’s sheltered enough, that it never actually got below zero there. Then there were a few days where it obviously did get below zero, that wasn’t reflected in the results, and so I scratched my head and dug out the code.

It was obvious when I looked what the issue was; I’d made no attempt to deal with negative temperatures. My excuse for this is that my DS18B20 1-Wire temperature sensor code didn’t make any attempt to deal with negative temperatures either - it didn’t need to, as those sensors are all deployed inside my home, and if the temperature gets towards zero the heating is turned on. So, first mistake: not thinking about the fact the external sensor was going to have a different set of requirements/limits than the internal ones.

Secondly, when I looked at the code closely, it wasn’t clear to me how I’d ever been getting the right value. The temperature is a 12 bit value in the middle of a 36 bit data stream, so there’s a shift and mask to extract it for printing. I strip out 4 bits which are always one, which makes things fit nicely into a 32 bit variable. Except I failed to take that into account for the temperature shift - all the other pieces of information were correctly handled, just not the temperature.
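Since the only code in this post is shell, here’s the shape of the negative temperature fix sketched in shell arithmetic - the packet value and the field offset below are invented for illustration, not the actual Digoo DG-R8H encoding. The point is that a 12 bit two’s complement field has to be sign extended after the shift and mask, otherwise sub-zero readings come back as large positive numbers:

```sh
# Illustration only: the packet value and the bit offset are made up,
# not the real Digoo DG-R8H layout. Assume a 36 bit packet with the
# temperature as a 12 bit two's complement field (tenths of a degree C)
# starting at bit 16.
packet=$(( 0xA5FF17F4F ))

raw=$(( (packet >> 16) & 0xFFF ))  # shift the full value, then mask
naive=$raw                         # unsigned read: 4081, i.e. +408.1C
temp=$(( (raw ^ 0x800) - 0x800 ))  # sign-extend bit 11: -15, i.e. -1.5C

echo "naive=$naive signed=$temp"
```

The xor/subtract idiom generalises to any field width - for a 16 bit register like the DS18B20’s the constant would be 0x8000.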

Now when I mentioned this on IRC Lars Wirzenius helpfully piped up with something along the lines of “Of course your test suite caught this for you”. I’ve got a bunch of excuses here: part of it is that once you involve hardware a full end-to-end test becomes a lot harder; part of it is that this is a personal project, and writing the code is more fun than writing an exhaustive test suite; and part of it is that I obviously just never wrote the negative temperature support, so most probably I’d not have written a test for it either.

Also I did actually perform some testing before putting it into service. Values looked sane when compared to it sitting inside on my desk, and it sitting outside on the window sill. It varied over time as I’d expect, with overnight temperatures being lower than during the day. I even had a weird issue where I was seeing a daily peak at around 7am which I investigated and realised was a result of that being the point where sunlight would bounce off another window and shine directly on the Digoo device.

So what additional testing did I do this time, to make sure I’d fixed the issue properly? I put the Digoo in the freezer…

Anyway, my attempt to take some generally useful lessons from this experience are as follows:

  • Not implementing functionality is fine, but at least make a note (file a ticket if it’s a formalised project) of the fact. I’m a big fan of low priority feature bugs to track these things.
  • Developer driven testing will miss things. A second pair of eyes / a suitably devious test mind would have thought of this sort of thing very quickly. Unit tests are great, but I’ve a big belief that QA is a distinctly different skill to development and you need both in your team.
  • 20+ years of experience with bit shifting doesn’t mean you won’t mess it up in a non-obvious way.
  • Testing that involves hardware can provide trickier problems than pure software testing.

(Oh, and if you really care, I fixed the DS18B20 negative temperature support for completeness.)

A beginner tries PCB assembly

I wrote last year about my experience with making my first PCB using JLCPCB. I’ve now got 5 of the boards in production around my house, and another couple assembled on my desk for testing. I also did a much simpler board to mount a GPS module on my MapleBoard - basically just a suitable DIP connector and mount point for the GPS module. At that point I ended up having to pay for shipping; not being in a hurry I went for the cheapest option, which meant the total process took 2 weeks from order until it arrived. Still not bad for under $8!

Just before Christmas I discovered that JLCPCB had expanded their SMT assembly option beyond the Chinese market, and were offering coupons off (though even without those they had much, much lower assembly/setup fees than anywhere else I’d seen). Despite being part of LCSC the parts library can be a bit limited (partly, it seems, because there’s nothing complex to assemble, such as connectors), with a set of “basic” components that attract no setup fee and then “extended” options which have a $3 setup fee (because they’re not permanently loaded, AIUI).

To test out the service I decided to revise my IoT board. First, I’ve used a few of them for 12V LED strip control, which has meant the 3.3V LDO is working harder than ideal, so I wanted to switch (ha ha) to a buck converter. I worked back from the JLCPCB basic parts list and chose an MP2451, which has a handy data sheet with an example implementation. I also upgraded the ESP module to an ESP32-WROOM - I’ve had some issues getting non-flickery PWM out of the ESP8266, and the ESP32 has hardware PWM; I also have some applications the Bluetooth would be useful for. Once again I turned to KiCad to draw the schematic and lay out the board. I kept the same form factor for ease, as I knew I could get a case for it. The more complex circuitry was a bit harder to lay out in the same space, and the assembly service being single sided only complicates things further, but the fact the soldering was being done for me meant I could drop to 0603 parts.

All-in-all I ended up with 17 parts for the board assembly, with the ESP32 module and power connectors being my responsibility (JLCPCB only have the basic ESP32 chip and I did not feel like trying to design a PCB antenna). I managed to get everything except the inductor from the basic library, which kept costs down. Total cost for 10 boards, parts, assembly, shipping + customs fees was just under $29, which is amazing value to me. What’s even better is that the DFM (design for manufacturing) checks they ran caught that I’d placed the MP2451 the wrong way round, and they automatically rotated it 180° for me. Phew!

The order was placed in the middle of December and arrived just before New Year - again, about 2 weeks total time end to end. Very impressive. Soldering the ESP32 module on was more fiddly than the ESP-07, but it all worked first time with both 5V + 12V power supplies, so I’m very pleased with the results.

ESP32 IoT PCB

Being able to do cheap PCB assembly is a game changer for me. There are various things I feel confident enough to design for my own use that I’d never be able to solder up myself; and for these prices it’s well worth a try. I find myself currently looking at some of the basic STM32 offerings (many of them in JLCPCB’s basic component range) and pondering building a slightly more advanced dev board around one. I’m sure my PCB design will cause those I know in the industry to shudder, but don’t worry, I’ve no plans to do this other than for my own amusement!
