My Squeeze upgrade notes
I did my first upgrade of a lenny box to squeeze today; a test server in work. All went pretty smoothly but I had a handful of things I had to frob manually that I thought I’d write up here (there’s a rough sketch of the commands after the list):
- collectd SNMP plugin needs MIBs or it won’t start: installed snmp-mibs-downloader from non-free
- autofs starts before nis (#470573): added ypbind to the Required-Start: line in /etc/init.d/autofs and re-ran insserv
- NFS automounts now using NFSv4 instead of v3 and user/group mapping setting everything to nobody: set NEED_IDMAPD=yes in /etc/default/nfs-common and ensure Domain in /etc/idmapd.conf is set to the correct domain name (look in /var/log/daemon.log for lines from rpc.idmapd saying “does not map into domain” to find out what this should be if you don’t know).
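Roughly, the commands for the three fixes (a sketch against a similar squeeze box; the idmapd Domain value is whatever daemon.log tells you):

```sh
# collectd's SNMP plugin won't start without MIBs; they're in non-free
apt-get install snmp-mibs-downloader

# autofs/nis ordering (#470573): add ypbind to the Required-Start:
# line of the init script, then regenerate the boot ordering
editor /etc/init.d/autofs
insserv

# NFSv4 ID mapping: run rpc.idmapd and point it at the right domain
editor /etc/default/nfs-common   # NEED_IDMAPD=yes
editor /etc/idmapd.conf          # Domain = <whatever daemon.log says>
/etc/init.d/nfs-common restart
```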
I think that’s pretty smooth overall; kudos to all those involved. I’ve a few more boxes to upgrade, but they’re all more likely to have people complaining at me if there are hiccups so they’ll have to wait until I have a suitable block of time set aside.
Why Linux? (Part 6: Freedom)
(This is part of a series of posts on Why Linux?)
I think of myself as reasonably pragmatic in my approach to Free/Open Source software. I don’t get worked up over which set of language people want to use. I use devices that require binary firmware to be downloaded to them (because just because I can’t see it doesn’t mean it doesn’t exist). I have non-free in my sources.list.
And yet, talking to other Linux users these days, I realize I’m much more of a Freedom nut job than average. I want the source, be it for a driver, a minor widget, or a full app. I don’t buy nVidia. I will sacrifice a degree of functionality in order to get Free. And while I think WINE is an excellent piece of software, I think the best end result is that it’s no longer necessary, not that it’s a perfect implementation of the ABI.
How does any of this help justify my use of Linux in the work place? As previously mentioned, I’m a developer. Most developers don’t operate in a vacuum; they have to inter-operate with other ecosystems. And usually somewhere along the line there’s a failure to document exactly how something is handled, or an ambiguity about what exact choice might be taken. If I have access to the source then I can check that out for myself. If I don’t, I have to guess. As an example, a long time ago I was involved in writing a serial console driver for QNX. There came a point where the behaviour wasn’t quite as we’d expect. Although the organisation had a license for the source, I wasn’t allowed to look at it. Instead I had to come up with a series of suitable questions that someone who could look at the source could answer without violating any NDAs. If I’d been able to look at the source directly we’d have all saved a lot of time. And that’s an example where someone could look at the source, rather than having to make a bunch of guesses and instrument tests to see which was right.
Access to the Linux source has helped me in other commercial contexts too. At Black Cat we were able to take advantage of patches like grsecurity in order to tighten up shell account boxes. I wrote the IPv6 support for l2tpns, because we had access to the source and could. I’ve been able to look at the source to understand exactly what SCSI responses are sent in certain circumstances too (or understand exactly what the error that a user land test program was getting back meant).
Also I’m a big believer in Linus’ Law. I do think that good Free software is much better than proprietary software (there’s some really bad Free software out there though, I’m not disputing that). The fact that smart people can look at it and scratch whatever their itch is means that we get a gradual process of improvement that can’t be ignored. Equally, as long as someone has an interest in the software, end users can’t be left high and dry by organisations abandoning applications that still have users. I think that should be a powerful driver for business to look towards Free software.
(Before my more astute readers point it out; yes, I am employed writing non-free software. See the first sentence. One day I’ll find a job working on Free software that ticks enough of the other boxes to be viable.)
We fear undocumented change
I love revision control. I love the ability to track changes over time, whether that be to see why I changed something in the past, or to see why a particular thing has stopped working, or to see if a particular thing is fixed in a more recent version than the one I’m using.
However I have a few opinions about the use of revision control that are obviously not shared by other people. Here are a few of them:
- One change per changeset. The only argument I can see against this is laziness. Changesets are cheap. Checking in multiple things in a single go makes it hard to work out exactly which piece of code fixes which problem. I’m fine with a big initial drop of code if logically it all needs to go together, but changesets that bundle up half a dozen different fixes piss me off.
- Descriptive changeset comments. Don’t make me guess what you changed. Tell me. Bug numbers are not sufficient (though including them is really helpful).
- Comments in the changeset, not per file. I’ve only seen this with BitKeeper; you can have per file comments and then an overall changeset comment. At first I thought this was quite neat, because you can explain each part of a change. Now it just annoys me, because I want the relevant detail in one place rather than having to drill down to a per file level to figure out what’s going on.
- The tree should always compile. There are people I respect who are all for checking in all the time throughout development no matter what the status. I have to disagree, at least for anything that’s available to other people. The tree should always compile. This avoids pissing off your coworkers (especially if they’re in a different timezone) and means you can do things like git bisect more easily (there’s a quick sketch after this list). Plus it shows you’ve at least done minimal testing.
- Don’t hide your tree. I like centralised locations for master trees. It means I can make an educated guess about where to look first for information about changes. Trees that live in obscure network shares or, worse, someone’s home directory aren’t helpful. While I may not always agree with the choice of VCS for the centralised service, as long as it’s actually fit for purpose I think it makes much more sense to use it than to go off on a separate path that’s less obvious for others to find.
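On the bisect point: an always-compiling tree is exactly what makes git bisect mechanical. A quick sketch (the known-good revision and the test command are placeholders):

```sh
# hunt down the changeset that broke things between two known points
git bisect start
git bisect bad HEAD           # the current tip is broken
git bisect good v1.2          # placeholder: last known-good revision
# let git check out midpoints and test each one automatically; the
# command must exit 0 for a good revision and non-zero for a bad one
git bisect run sh -c 'make && ./run-tests'
git bisect reset              # back to where you started
```

A revision that doesn’t compile gets marked bad too, muddying the result, unless the script special-cases it with exit code 125 to skip it; hence the rule.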
Why Linux? (Part 5: Flexibility)
(This is part of a series of posts on Why Linux?)
I find Linux more flexible. Maybe that’s the familiarity showing, maybe it’s about the package management, but it’s a powerful reason for me to use it.
For example, a couple of years ago I wanted to try out some iSCSI stuff against a SAN. Of course I have test boxes available I can do this on, but this was just to try out a few bits and pieces rather than anything more concrete. So I installed open-iscsi on my desktop and was able to merrily do the tests I wanted with very little additional work.
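For reference, the open-iscsi side of that is only a couple of commands (the portal address below is a placeholder):

```sh
apt-get install open-iscsi
# ask the SAN what targets it offers (192.0.2.10 is a placeholder portal)
iscsiadm -m discovery -t sendtargets -p 192.0.2.10
# log in to the discovered targets; the LUNs then show up as ordinary
# /dev/sd* block devices
iscsiadm -m node --login
```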
Or I wanted to try out some BitKeeper to git conversion work recently. I wasn’t sure how much resource it would take on a build server, and didn’t want to tie things up there. So I ran it on my desktop overnight, where I could easily set up the appropriate environment and wouldn’t impact on anyone else’s resources.
Problems talking to dodgy hardware? Linux is much better about giving you some idea what’s going on, without needing to install extra software. I had a workmate grappling with an old USB music player recently; hooking it up to her Windows laptop wasn’t providing a lot of joy so I attached it to my Linux box and was able to see that it did identify ok, but was disconnecting randomly at times too.
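That diagnosis needed nothing beyond the tools already on the box; a sketch of the sort of thing I mean (the vendor:product ID is made up):

```sh
# watch the kernel log while plugging the device in; enumeration,
# driver binding and any random disconnects all show up here
tail -f /var/log/kern.log     # or just run dmesg afterwards
# list attached USB devices with their vendor/product IDs
lsusb
# full descriptor dump for one device (ID here is a placeholder)
lsusb -v -d 1234:5678
```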
Want to script querying an AD server for the current employee list and displaying who’s joined and who’s left since the last time you did so? I found that easy enough with the common Linux LDAP tools. I’m sure it’s doable under Windows too, but I’m not sure it would be quite so simple. For bonus points add graphviz into the mix for automatic organisation charts (modulo accuracy of the AD data).
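A sketch of how that looks with the stock OpenLDAP client tools (the server, bind DN, base DN and filenames are all placeholders, and AD will want an authenticated bind):

```sh
# dump the current employee list from AD
ldapsearch -x -H ldap://ad.example.com -W \
    -D 'reporting@example.com' \
    -b 'OU=Staff,DC=example,DC=com' \
    '(objectClass=user)' sAMAccountName \
  | awk '/^sAMAccountName:/ { print $2 }' | sort > employees.today

# joiners and leavers since the last run
diff employees.yesterday employees.today
mv employees.today employees.yesterday
```

For the organisation charts, pull the manager attribute as well and emit each user/manager pair as an edge in a dot file for graphviz.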
This flexibility is something that helps me do my job. Sure, as I mentioned above I do have access to test boxes that I can use for this, but being able to do it on my desktop can be useful - for example if I’m offline, or on a slow network connection, or just geographically distant from my test machines so network latency is higher than I’d like.
(Also, it’s something that makes a Linux box a really great test box. I’m lucky in that I have a mix of OSes available to me for testing, but the one that I use most often is the Debian box. Much easier to get and install decent diagnosis tools for it that can give me packet level dumps, or do really odd stuff that turns out to be really useful.)
Why Linux? (Part 4: Package Management)
(This is part of a series of posts on Why Linux?)
I’ve run a number of distros in my time. I ended up on Debian near the end of 1999, and part of the drive for that was the number of packages available in one centralised location. Decent package management is a definite strength of Linux (or FreeBSD) over proprietary operating systems. It derives from the freedom aspect, but means you can end up with one source for all (or most) of your software, that’s compiled against the same set of libraries, with one way to track what owns what.
This may not seem like a big thing, especially if you’re a hobbyist or are coming from a Windows background. Reinstalling is often seen as a necessary regular requirement. Personally I’ve got better things to do with my time. If I want to try out a piece of software I want to be able to install it safe in the knowledge I know exactly what files it owns and where they are. And I want it to be able to tell me what other common components it needs that I might not already have. Then if I decide it’s not for me I can cleanly remove it and anything else it pulled in that I no longer need.
Don’t underestimate this. This is useful on all of my machines. I can query the version number of everything installed. I can check for updates with one command (no need for every piece of installed software to have its own updater implementation). Software can share libraries correctly rather than each program stashing its own private copy, meaning I get bug fixes and security updates. (Yes, sometimes authors bundle even in the Linux world. Stop it.)
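Concretely, the sort of one-liners I mean under dpkg/apt (the package and file names are just examples):

```sh
dpkg -l                          # every installed package, with version
dpkg -L collectd                 # every file a given package owns
dpkg -S /etc/idmapd.conf         # which package owns a given file
apt-get update && apt-get upgrade    # one updater for everything
apt-get remove --purge collectd      # clean removal, config included
apt-get autoremove               # drop dependencies nothing needs any more
```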
I’m a developer. I tend to interact with a lot of different systems, of different types. It’s really handy to have access to a wide range of tools to help me with that, know that there’s legally no problem with me installing them, be able to do so with a single command and, should they turn out to be unsuitable, know I can cleanly remove them with another single command. This is a definite win in the work context.
Equally I’ve been a sysadmin for multiple machines at once. Being able to login to each of them and check that everything is up to date is damn handy. Being able to easily install software for customers tends to make you popular too. And being able to rebuild boxes (or build additional boxes to share load) with the same setup is a lot easier with a decent package manager too.
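The rebuild case is close to a one-liner each way (a sketch; this replays the package selections, not the configuration under /etc):

```sh
# on the box being cloned: record what's installed
dpkg --get-selections > selections.txt
# on the fresh box: feed the list back in and let apt do the rest
dpkg --set-selections < selections.txt
apt-get dselect-upgrade
```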
And, to pre-empt any responses about how a lot of this is possible under, say, Windows: yes, it is. I’ve spent some time in the past building packages for commercial deployment using Novadigm’s Radia tool. I’m aware that Windows’ integral package management has also got better over time. I still think dpkg/apt (or rpm/yum) is far more powerful. And, for the end user, mostly easier as well - distros are building pre-prepared packages for you, rather than you having to do it yourself like with Radia.