(This is part of a series of posts on Why Linux?)
I think of myself as reasonably pragmatic in my approach to Free/Open Source software. I don’t get worked up over which set of language people want to use. I use devices that require binary firmware to be downloaded to them (just because I can’t see it doesn’t mean it doesn’t exist). I have non-free in my sources.list.
And yet, talking to other Linux users these days, I realize I’m much more of a Freedom nut job than average. I want the source, be it for a driver, a minor widget, or a full app. I don’t buy nVidia. I will sacrifice a degree of functionality in order to get Free. And while I think WINE is an excellent piece of software, I think the best end result is that it’s no longer necessary, not that it’s a perfect implementation of the ABI.
How does any of this help justify my use of Linux in the work place? As previously mentioned, I’m a developer. Most developers don’t operate in a vacuum; they have to inter-operate with other ecosystems. And usually somewhere along the line there’s a failure to document exactly how something is handled, or an ambiguity about exactly which choice might be taken. If I have access to the source then I can check for myself. If I don’t, I have to guess. As an example, a long time ago I was involved in writing a serial console driver for QNX. There came a point where the behaviour wasn’t quite what we expected. Although the organisation had a licence for the source, I wasn’t allowed to look at it. Instead I had to come up with a series of suitable questions that someone who could look at the source could answer without violating any NDAs. If I’d been able to look at the source directly we’d all have saved a lot of time. And that’s an example where someone could look at the source, rather than having to make a bunch of guesses and instrument tests to see which was right.
Access to the Linux source has helped me in other commercial contexts too. At Black Cat we were able to take advantage of patches like grsecurity in order to tighten up shell account boxes. I wrote the IPv6 support for l2tpns, because we had access to the source and could. I’ve been able to look at the source to understand exactly what SCSI responses are sent in certain circumstances too (or to understand exactly what the error a userland test program was getting back meant).
Also I’m a big believer in Linus’ Law. I do think that good Free software is much better than proprietary software (there’s some really bad Free software out there though, I’m not disputing that). The fact that smart people can look at it and scratch whatever their itch is means that we get a gradual process of improvement that can’t be ignored. Equally, as long as someone has an interest in the software, end users can’t be left high and dry by organisations abandoning still-used applications. I think that should be a powerful driver for business to look towards Free software.
(Before my more astute readers point it out; yes, I am employed writing non-free software. See the first sentence. One day I’ll find a job working on Free software that ticks enough of the other boxes to be viable.)
I love revision control. I love the ability to track changes over time, whether that be to see why I changed something in the past, or to see why a particular thing has stopped working, or to see if a particular thing is fixed in a more recent version than the one I’m using.
However I have a few opinions about the use of revision control that are obviously not shared by other people. Here are a few of them:
One change per changeset.
The only argument I can see against this is laziness. Changesets are cheap. Checking in multiple things in a single go makes it hard to work out exactly which piece of code fixes which problem. I’m fine with a big initial drop of code if logically it all needs to go together, but changesets that bundle up half a dozen different fixes piss me off.
Descriptive changeset comments.
Don’t make me guess what you changed. Tell me. Bug numbers are not sufficient (though including them is really helpful).
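To sketch what I mean (using git for illustration; the file names and messages here are made up), two unrelated fixes sitting in the working tree should land as two changesets, each with a message that actually tells you what changed:

```shell
#!/bin/sh
# Two unrelated fixes, committed as two separate changesets.
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q
git config user.email you@example.com
git config user.name "Example"

echo "parser v1" > parser.c
echo "lexer v1"  > lexer.c
git add . && git commit -qm "Initial import of parser and lexer"

# Both fixes are in the working tree at once...
echo "handle empty input" >> parser.c
echo "accept CRLF"        >> lexer.c

# ...but each goes in on its own, with a descriptive message.
git commit -qm "parser: handle empty input without crashing" parser.c
git commit -qm "lexer: accept CRLF line endings" lexer.c

git log --oneline   # three commits, each one logical change
```

Anyone running `git log` or `git annotate` later gets the why of each change without guesswork.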
Comments in the changeset, not per file.
I’ve only seen this with BitKeeper; you can have per file comments and then an overall changeset comment. At first I thought this was quite neat, because you can explain each part of a change. Now it just annoys me, because I want the relevant detail in one place rather than having to drill down to a per file level to figure out what’s going on.
The tree should always compile.
There are people I respect who are all for checking in all the time throughout development no matter what the status. I have to disagree, at least for anything that’s available to other people. The tree should always compile. This avoids pissing off your coworkers (especially if they’re in a different timezone) and means you can do things like git bisect more easily. Plus it shows you’ve at least done minimal testing.
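The git bisect point is worth spelling out: if every changeset at least builds, `git bisect run` can find the guilty commit completely automatically. A throwaway sketch, where the “build” is just a script that starts failing once the bug goes in:

```shell
#!/bin/sh
# git bisect run demo: only works because every commit "builds" (runs).
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q
git config user.email you@example.com
git config user.name "Example"

echo "exit 0" > build.sh
git add build.sh && git commit -qm "initial: everything passes"
for i in 1 2 3; do
  echo "# harmless tweak $i" >> build.sh
  git commit -qam "tweak $i"
done
echo "exit 1" > build.sh                  # the breakage
git commit -qam "refactor that accidentally breaks the build"
echo "# one more tweak" >> build.sh
git commit -qam "later change on top of the breakage"

# Bisect between the first (good) commit and HEAD (bad):
first=$(git rev-list --max-parents=0 HEAD)
git bisect start HEAD "$first"
result=$(git bisect run sh build.sh)
git bisect reset
echo "$result" | grep "first bad commit"
```

If half the history doesn’t compile, every one of those bisect steps turns into a manual `git bisect skip`, and the automation is gone.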
Don’t hide your tree.
I like centralised locations for master trees. It means I can make an educated guess about where to look first for information about changes. Trees that live in obscure network shares or, worse, someone’s home directory aren’t helpful. While I may not always agree with the choice of VCS for the centralised service, as long as it’s actually fit for purpose I think it makes much more sense to use it than to go off on a separate path that’s less obvious for others to find.
(This is part of a series of posts on Why Linux?)
I find Linux more flexible. Maybe that’s the familiarity showing, maybe it’s about the package management, but it’s a powerful reason for me to use it.
For example, a couple of years ago I wanted to try out some iSCSI stuff against a SAN. Of course I have test boxes available that I can do this on, but this was just to try out a few bits and pieces rather than anything more concrete. So I installed open-iscsi on my desktop and was able to merrily do the tests I wanted with very little additional work.
Or I wanted to try out some BitKeeper to git conversion work recently. I wasn’t sure how much resource it would take on a build server, and didn’t want to tie things up there. So I ran it on my desktop overnight, where I could easily set up the appropriate environment and wouldn’t impact anyone else’s resources.
Problems talking to dodgy hardware? Linux is much better about giving you some idea what’s going on, without needing to install extra software. I had a workmate grappling with an old USB music player recently; hooking it up to her Windows laptop wasn’t providing a lot of joy, so I attached it to my Linux box and was able to see that it did identify ok, but was disconnecting randomly at times too.
Want to script querying an AD server for the current employee list and displaying who’s joined and who’s left since the last time you did so? I found that easy enough with the common Linux LDAP tools. I’m sure it’s doable under Windows too, but I’m not sure it would be quite so simple. For bonus points add graphviz into the mix for automatic organisation charts (modulo accuracy of the AD data).
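The joined/left part of that is just a diff of two sorted snapshots. A rough sketch — the ldapsearch invocation is commented out, and the server, bind DN and base DN in it are made up; the demo below fakes the two snapshots instead:

```shell
#!/bin/sh
# Diff two employee-list snapshots to report joiners and leavers.
set -e
cd "$(mktemp -d)"

# The real snapshot would come from AD via something like (details hypothetical):
#   ldapsearch -LLL -H ldap://ad.example.com -D me@example.com -W \
#     -b 'OU=Staff,DC=example,DC=com' '(objectClass=user)' cn |
#     sed -n 's/^cn: //p' | sort > employees.new

# Faked snapshots for the demo:
printf 'alice\nbob\ncarol\n' > employees.old
printf 'alice\ncarol\ndave\n' > employees.new

echo "Joined:"
comm -13 employees.old employees.new    # lines only in the new list
echo "Left:"
comm -23 employees.old employees.new    # lines only in the old list
```

From there, feeding the same dump (with manager attributes as edges) through graphviz’s dot is what gets you the automatic org charts.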
This flexibility is something that helps me do my job. Sure, as I mentioned above I do have access to test boxes that I can use for this, but being able to do it on my desktop can be useful - for example if I’m offline, or on a slow network connection, or just geographically distant from my test machines so network latency is higher than I’d like.
(Also, it’s something that makes a Linux box a really great test box. I’m lucky in that I have a mix of OSes available to me for testing, but the one that I use most often is the Debian box. Much easier to get and install decent diagnostic tools for it that can give me packet level dumps, or do really odd stuff that turns out to be really useful.)
(This is part of a series of posts on Why Linux?)
I’ve run a number of distros in my time. I ended up on Debian near the end of 1999, and part of the drive for that was the number of packages available in one centralised location. Decent package management is a definite strength of Linux (or FreeBSD) over proprietary operating systems. It derives from the freedom aspect, but means you can end up with one source for all (or most) of your software, that’s compiled against the same set of libraries, with one way to track what owns what.
This may not seem like a big thing, especially if you’re a hobbyist or are coming from a Windows background. Reinstalling is often seen as a necessary regular requirement. Personally I’ve got better things to do with my time. If I want to try out a piece of software I want to be able to install it safe in the knowledge that I know exactly what files it owns and where they are. And I want it to be able to tell me what other common components it needs that I might not already have. Then if I decide it’s not for me I can cleanly remove it and anything else it pulled in that I no longer need.
Don’t underestimate this. This is useful on all of my machines. I can query the version number of everything installed. I can check for updates with one command (no need for every piece of installed software to have its own updater implementation). Software can share libraries correctly rather than stashing its own private copies, meaning I get bug fixes and security updates. (Yes, sometimes authors bundle even in the Linux world. Stop it.)
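On a Debian-ish box the sort of queries I mean look like this (read-only, no root needed; the package names are just examples, and the block is a no-op on systems without dpkg):

```shell
#!/bin/sh
# Everyday package-database queries on a Debian/Ubuntu box (read-only).
command -v dpkg >/dev/null 2>&1 || exit 0   # skip gracefully elsewhere

# Version of an installed package:
dpkg-query -W -f '${Package} ${Version}\n' coreutils

# Which package owns a file?
dpkg -S "$(command -v dpkg)"

# What files does a package own?
dpkg -L coreutils | head -n 5

# One command to see what has updates pending:
apt list --upgradable 2>/dev/null | head -n 5 || true
```

Every piece of software on the system answers the same questions the same way, which is exactly what per-application updaters can’t give you.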
I’m a developer. I tend to interact with a lot of different systems, of different types. It’s really handy to have access to a wide range of tools to help me with that, know that there’s legally no problem with me installing them, be able to do so with a single command and, should they turn out to be unsuitable, know I can cleanly remove them with another single command. This is a definite win in the work context.
Equally I’ve been a sysadmin for multiple machines at once. Being able to login to each of them and check that everything is up to date is damn handy. Being able to easily install software for customers tends to make you popular too. And being able to rebuild boxes (or build additional boxes to share load) with the same setup is a lot easier with a decent package manager too.
And, to pre-empt any responses about how a lot of this is possible under, say, Windows: yes, it is. I’ve spent some time in the past building packages for commercial deployment using Novadigm’s Radia tool. I’m aware that Windows’ integral package management has also got better over time. I still think dpkg/apt (or rpm/yum) is far more powerful. And, for the end user, mostly easier as well - distros are building pre-prepared packages for you, rather than you having to do it yourself as with Radia.
This hasn’t been my experience, either in the UK or since I moved to the US. In the UK I ended up on an O2 Simplicity (month-by-month) plan which provided more minutes, SMSes and data allowance than I needed for £20/month (note that I only use the data for the phone; I didn’t tether it to my laptop). Originally I chose this because I wasn’t sure about coverage where I lived (that’s why I was changing provider), but it turned out to be a pretty good deal, saving me at least £10/month over a contract that I’d have been tied into. When the G1 was launched I wasn’t interested in moving to T-Mobile, who I knew had no 3G coverage outside of Belfast, so I ended up with one off eBay (received as a gift) and kept my O2 contract.
When I moved to the US I signed up to Simple Mobile mainly because I could get a SIM from eBay before I left the UK, and it was PAYG (so the fact I’d no credit record didn’t matter) but still included unlimited voice/SMS/data. Significantly more expensive at $60/month than I was used to paying in the UK, but seemed to be the going rate even for a contract.
Then the G2 launched back in October. I resisted for 2 or 3 weeks, then decided it had to be mine. The G1’s battery was even worse than it had been (to be fair, it had lasted 2 years), and although Cyanogen provided Android 2.2 the hardware wasn’t really up to it. I decided to go with T-Mobile; Simple Mobile use their network, so I knew the coverage would be fine, and I figured a contract was probably a good way to help get a credit record here.
Except, the pricing was a bit weird. $200 for the phone with a 2 year $80/month contract or $500 for the phone with an identical contract but no tie in and $60/month. Er, what? I pay up front and I save $180 and I can walk away whenever I want? Ok.
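Spelled out over the 24 months of the contract term (list prices only, taxes ignored), the sums are:

```shell
#!/bin/sh
# Total cost of the two G2 options over the 24-month contract term
# (list prices only; taxes ignored).
contract=$((200 + 24 * 80))   # subsidised phone, 2-year tie-in
upfront=$((500 + 24 * 60))    # full-price phone, month-to-month
echo "contract total:  \$$contract"                 # $2120
echo "up-front total:  \$$upfront"                  # $1940
echo "saving up front: \$$((contract - upfront))"   # $180
```

The extra $300 on the phone is wiped out by the $20/month difference inside 15 months, never mind 24.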
As it turned out this was the smart choice. Firstly $60/month means $60/month plus taxes, so I was paying more than I paid Simple Mobile. I figured I could bear that for a few months to get the credit history, plus the free network unlock after 3 months. Except then it became clear that international SMS wasn’t included in the unlimited SMS (it is with Simple). Most of my SMS is international. Now, T-Mobile have a $5/month bolt-on to cover that, but not if you’re on their flexpay scheme because they found themselves unable to verify your SSN. So I cancelled the contract after the first month and moved back to Simple. I didn’t even need to unlock the phone, since it’s the same network (though I have now, in preparation for my trip back to the UK over Christmas). Surprisingly T-Mobile didn’t try to keep me by sorting out the international SMS bolt-on. I guess US mobile customers are used to being screwed over (certainly the pricing suggests that).
Er, sorry, that turned into a bit of a T-Mobile rant. My original point was that all of my recent mobile contracts have been month by month, haven’t involved a subsidised phone, and have saved me money over being tied in. And even if they hadn’t, my experiences with the flexibility offered by not being tied in (worries about coverage, discovering the deal isn’t as good as you thought) mean I’m pretty much convinced that contract-free is the way to go anyway.
Dear America: for all your complaints about VAT, it’s not really a lot different from sales tax, and at least the prices in shops/online actually include it. Also, it’s the same everywhere in the country.