We fear undocumented change

I love revision control. I love the ability to track changes over time, whether that be to see why I changed something in the past, or to see why a particular thing has stopped working, or to see if a particular thing is fixed in a more recent version than the one I’m using.

However, I have a few opinions about the use of revision control that are obviously not shared by everyone. Here are a few of them:

  • One change per changeset.

    The only argument I can see against this is laziness. Changesets are cheap. Checking in multiple things in a single go makes it hard to work out exactly which piece of code fixes which problem. I’m fine with a big initial drop of code if logically it all needs to go together, but changesets that bundle up half a dozen different fixes piss me off.

  • Descriptive changeset comments.

    Don’t make me guess what you changed. Tell me. Bug numbers are not sufficient (though including them is really helpful).

  • Comments in the changeset, not per file.

    I’ve only seen this with BitKeeper; you can have per-file comments and then an overall changeset comment. At first I thought this was quite neat, because you can explain each part of a change. Now it just annoys me, because I want the relevant detail in one place rather than having to drill down to a per-file level to figure out what’s going on.

  • The tree should always compile.

    There are people I respect who are all for checking in all the time throughout development, no matter what the status. I have to disagree, at least for anything that’s available to other people. The tree should always compile. This avoids pissing off your coworkers (especially if they’re in a different timezone) and means you can do things like git bisect more easily (see the sketch after this list). Plus it shows you’ve at least done minimal testing.

  • Don’t hide your tree.

    I like centralised locations for master trees. It means I can make an educated guess about where to look first for information about changes. Trees that live in obscure network shares or, worse, someone’s home directory aren’t helpful. While I may not always agree with the choice of VCS for the centralised service, as long as it’s actually fit for purpose I think it makes much more sense to use it than to go off on a separate path that’s less obvious for others to find.
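
As an aside on the git bisect point above, here’s a minimal sketch of how a bisect run goes; the known-good tag and the test step are placeholders for whatever fits your project:

    git bisect start
    git bisect bad HEAD          # the current tree shows the problem
    git bisect good v1.0         # hypothetical tag known to be good
    # git checks out a revision half way between; build and test it, then
    # report the result and repeat until git names the guilty commit:
    git bisect good              # or "git bisect bad", as appropriate
    git bisect reset             # return to where you started

Every step needs a tree that builds; revisions that don’t have to be stepped around with git bisect skip, which is exactly why broken intermediate checkins hurt.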

Why Linux? (Part 5: Flexibility)

(This is part of a series of posts on Why Linux?)

I find Linux more flexible. Maybe that’s the familiarity showing, maybe it’s about the package management, but it’s a powerful reason for me to use it.

For example, a couple of years ago I wanted to try out some iSCSI stuff against a SAN. Of course I have test boxes available that I could do this on, but this was just to try out a few bits and pieces rather than anything more concrete. So I installed open-iscsi on my desktop and was able to merrily do the tests I wanted with very little additional work.
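
The whole exercise amounts to little more than the following sketch (the portal address and target name are placeholders):

    # Discover the targets the SAN offers, then log in to one
    iscsiadm -m discovery -t sendtargets -p 192.0.2.10
    iscsiadm -m node -T iqn.2004-01.com.example:target0 -p 192.0.2.10 --login
    # The LUN then appears as an ordinary block device (e.g. /dev/sdb)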

Or I wanted to try out some BitKeeper to git conversion work recently. I wasn’t sure how much resource it would take on a build server, and didn’t want to tie things up there, so I ran it on my desktop overnight, where I could easily set up the appropriate environment and wouldn’t impact on anyone else’s resources.

Problems talking to dodgy hardware? Linux is much better about giving you some idea what’s going on, without needing to install extra software. I had a workmate grappling with an old USB music player recently; hooking it up to her Windows laptop wasn’t providing a lot of joy, so I attached it to my Linux box and was able to see that it did identify OK, but was also disconnecting at random.
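
Watching the kernel log while plugging the device in tells you most of this (dmesg --follow needs a reasonably recent util-linux; tail -f /var/log/kern.log does the same job on older boxes):

    # Watch kernel messages as the device is attached and used
    dmesg --follow
    # Or watch device events directly for connect/disconnect cycles
    udevadm monitor --kernel --subsystem-match=usb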

Want to script querying an AD server for the current employee list and displaying who’s joined and who’s left since the last time you did so? I found that easy enough with the common Linux LDAP tools. I’m sure it’s doable under Windows too, but I’m not sure it would be quite so simple. For bonus points add graphviz into the mix for automatic organisation charts (modulo accuracy of the AD data).
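
As a sketch of the sort of thing I mean (the server, bind DN and base DN are all placeholders):

    # Pull the current list of user accounts out of AD over LDAP
    ldapsearch -x -H ldap://ad.example.com -W \
        -D 'reporter@example.com' \
        -b 'OU=Staff,DC=example,DC=com' \
        '(objectClass=user)' sAMAccountName \
        | grep '^sAMAccountName:' | sort > employees.today

    # Joiners show up as additions, leavers as removals
    diff employees.yesterday employees.today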

This flexibility is something that helps me do my job. Sure, as I mentioned above I do have access to test boxes that I can use for this, but being able to do it on my desktop can be useful - for example if I’m offline, or on a slow network connection, or just geographically distant from my test machines so network latency is higher than I’d like.

(Also, it’s something that makes a Linux box a really great test box. I’m lucky in that I have a mix of OSes available to me for testing, but the one that I use most often is the Debian box. It’s much easier to get and install decent diagnostic tools for it - ones that can give me packet-level dumps, or do really odd stuff that turns out to be really useful.)

Why Linux? (Part 4: Package Management)

(This is part of a series of posts on Why Linux?)

I’ve run a number of distros in my time. I ended up on Debian near the end of 1999, and part of the drive for that was the number of packages available in one centralised location. Decent package management is a definite strength of Linux (or FreeBSD) over proprietary operating systems. It derives from the freedom aspect, but means you can end up with one source for all (or most) of your software, that’s compiled against the same set of libraries, with one way to track what owns what.

This may not seem like a big thing, especially if you’re a hobbyist or are coming from a Windows background, where reinstalling is often seen as a necessary regular requirement. Personally I’ve got better things to do with my time. If I want to try out a piece of software I want to be able to install it safe in the knowledge that I know exactly which files it owns and where they are. And I want it to be able to tell me what other common components it needs that I might not already have. Then, if I decide it’s not for me, I can cleanly remove it and anything else it pulled in that I no longer need.
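
With dpkg and apt that whole cycle looks something like this (the package name is a placeholder):

    apt-get install somepackage   # dependencies get pulled in automatically
    dpkg -L somepackage           # exactly which files it owns, and where
    apt-get remove somepackage    # clean removal if it turns out not to be for me
    apt-get autoremove            # drop anything it pulled in that's now unused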

Don’t underestimate this. This is useful on all of my machines. I can query the version number of everything installed. I can check for updates with one command (no need for every piece of installed software to have its own updater implementation). Software can share libraries correctly rather than stashing its own private copies, meaning I get bug fixes and security updates in one place. (Yes, sometimes authors bundle even in the Linux world. Stop it.)
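
Those one-command queries, concretely (Debian-flavoured; rpm/yum systems have direct equivalents):

    dpkg -l                             # name and version of everything installed
    apt-get update && apt-get upgrade   # check for and apply updates in one go
    dpkg -S /usr/bin/somefile           # which package owns a file (placeholder path)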

I’m a developer. I tend to interact with a lot of different systems, of different types. It’s really handy to have access to a wide range of tools to help me with that, know that there’s legally no problem with me installing them, be able to do so with a single command and, should they turn out to be unsuitable, know I can cleanly remove them with another single command. This is a definite win in the work context.

Equally I’ve been a sysadmin for multiple machines at once. Being able to log in to each of them and check that everything is up to date is damn handy. Being able to easily install software for customers tends to make you popular too. And being able to rebuild boxes (or build additional boxes to share load) with the same setup is a lot easier with a decent package manager.

And, to pre-empt any responses about how a lot of this is possible under, say, Windows: yes, it is. I’ve spent some time in the past building packages for commercial deployment using Novadigm’s Radia tool, and I’m aware that Windows’ integral package management has also got better over time. I still think dpkg/apt (or rpm/yum) is far more powerful. And, for the end user, mostly easier as well - distros are building pre-prepared packages for you, rather than you having to do it yourself as with Radia.

Contract free phones are the way forward

Russell complains about locked down phones and horrible telcos, in particular about not getting a discount on your monthly contract if you don’t get a phone with it.

This hasn’t been my experience, either in the UK or since I moved to the US. In the UK I ended up on an O2 Simplicity (month-by-month) plan which provided more minutes, SMSes and data allowance than I needed for £20/month (note that I only used the data on the phone itself; I didn’t tether it to my laptop). Originally I chose this because I wasn’t sure about coverage where I lived (that’s why I was changing provider), but it turned out to be a pretty good deal, saving me at least £10/month over a contract that I’d have been tied into. When the G1 was launched I wasn’t interested in moving to T-Mobile, who I knew had no 3G coverage outside of Belfast, so I ended up with one off eBay (received as a gift) and kept my O2 plan.

When I moved to the US I signed up with Simple Mobile, mainly because I could get a SIM from eBay before I left the UK, and it was PAYG (so the fact I’d no credit record didn’t matter) but still included unlimited voice/SMS/data. At $60/month it was significantly more expensive than I was used to paying in the UK, but that seemed to be the going rate even for a contract.

Then the G2 launched back in October. I resisted for 2 or 3 weeks, then decided it had to be mine. The G1’s battery life was even worse than it had been (to be fair, it had lasted 2 years), and although Cyanogen provided Android 2.2 the hardware wasn’t really up to it. I decided to go with T-Mobile; Simple Mobile use their network, so I knew the coverage would be fine, and I figured a contract was probably a good way to help get a credit record here.

Except the pricing was a bit weird: $200 for the phone on a 2 year contract at $80/month, or $500 for the phone with an identical plan at $60/month and no tie-in. Er, what? I pay $300 more up front, save $480 in monthly fees over the two years ($180 net), and I can walk away whenever I want? Ok.

As it turned out this was the smart choice. Firstly, $60/month means $60/month plus taxes[0], so I was paying more than I paid Simple Mobile. I figured I could bear that for a few months to get the credit history, plus the free network unlock after 3 months. Except it then became clear that international SMS wasn’t included in the unlimited SMS (it is with Simple), and most of my SMS is international. Now, T-Mobile have a $5/month bolt-on to cover that, but not if you’re on their FlexPay scheme because they found themselves unable to verify your SSN. So I cancelled the contract after the first month and moved back to Simple. I didn’t even need to unlock the phone, since it’s the same network (though I have now, in preparation for my trip back to the UK over Christmas). Surprisingly, T-Mobile didn’t try to keep me by sorting out the international SMS bolt-on. I guess US mobile customers are used to being screwed over (certainly the pricing suggests so).

Er, sorry, that turned into a bit of a T-Mobile rant. My original point was that all of my recent mobile contracts have been month by month, haven’t involved a subsidised phone, and have saved me money over being tied in. And even if they hadn’t, my experiences with the flexibility offered by not being tied in (worries about coverage, discovering the deal isn’t as good as you thought) mean I’m pretty much convinced that contract-free is the way to go anyway.

[0] Dear America: for all your complaints about VAT, it’s not really a lot different from sales tax, and at least the prices in shops/online actually include it. Also it’s the same everywhere in the country.

Why Linux? (Part 3: It's cheap)

(This is part of a series of posts on Why Linux?)

Linux, to me as an end user, is cheap. Even taking into account the fact that most PCs come with a Windows licence included, it’s still cheaper for me to run Linux. I paid Steve minimal amounts for my first set of Debian install CDs. These days I can burn my own netinst CDs and pull the rest over the internet. I legally have access to a tremendous range of excellent software, for nothing. All the apps I need are available without having to shell out more. How is that not awesome? I get free updates, both for bug fixes and for major features. I’m not left choosing between paying lots of money for the latest and greatest and soldiering on with an unsupported old release with known bugs.

The counter-argument from my Windows-using friends is often about how they didn’t pay for their Windows updates or their copy of Office. I’m unimpressed by anyone who tells me Windows is a better option but is unprepared to pay for it. If you have to obtain it illegally for it to compete with Linux then you’re not really comparing on equal terms, are you? And don’t tell me that Free Software takes away jobs from software engineers while pirating software yourself, eh?

Cost isn’t just about the money, though. I’ve put many hours into being involved in Debian. I’ve provided project resources when I was in a position to do so. I’ve contributed to the Linux kernel. Not quite the same as paying for it, but I think it does indicate that I’m trying to give back a little too. I also accept that at an organisational level the basic cost of the software licences is often negligible compared to things like hardware, training and support.

I still think cost is a compelling argument for the home user, and for decisions at an organisational level. As mentioned, I realise there are issues with training and support, but I don’t believe these costs are any higher than for alternative OSes. Linux also makes it remarkably easy to remotely administer machines, and to perform common actions across an entire installed estate, without needing extra bolt-ons from third parties.
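
A trivial sketch of what I mean, with hypothetical host names, assuming ssh access and appropriate privileges on each box:

    # Check every box for pending updates without leaving my desk
    for host in web1 web2 db1; do
        echo "== $host =="
        ssh "$host" 'apt-get update -qq && apt-get -s upgrade | grep ^Inst'
    done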

Cost doesn’t provide sufficient justification for an individual desktop in an organisation that has site licences for an alternative however (and in fact running Linux requires extra work on my part to do the install and maintenance compared to allowing central IT to manage my machine). So that’s not a good enough reason.
