Cloning a VMware Server VM

I recently had a need to make a bunch of clones of a VMware virtual image on my VMware Server machine. After doing a few by hand, I got tired of it and wrote a little script to do it for me. The script assumes that there’s a working set of virtual image files in a directory named “vm-template” and that the virtual machine name defined in the template is also “vm-template.” You can change these values by editing the SOURCE_DIR and SOURCE_NAME variables at the top of the file (or you could modify the script to set them from passed arguments). Whatever image you wind up using as the clone source, make sure it isn’t running, or you may get unpredictable results during the copy.

To use the script, just run it, passing the desired new virtual image name as an argument. A directory will be created using that name (so avoid spaces or weird characters; escaping them might also be ok). Files from the image to be cloned will then be copied into the new directory. The SOURCE_NAME value in the source image’s .vmx file will be replaced with the name you pass as an argument, and all files will be renamed to use the argument’s name rather than the SOURCE_NAME value. To clarify: Typically, your source image will live in a directory named (for example) “vm-template” and will be full of files named (for example) “vm-template.vmx,” “vm-template.vmdk,” etc. The script renames any such matching files to use the argument passed, and it changes references to those names within the .vmx file to point to the renamed files.

If your source image is large, it could take a few minutes to copy the files. The rest of the process goes quickly. Once you’re done, if you’re using VMware Server, you’ll want to pick the option to add a new machine to the inventory and then browse to the new .vmx file. When you boot it up, you should see (I’m using the web console here) a prompt asking whether you copied or moved the image. This is because we didn’t do anything to change the UUID that identifies the image. Tell VMware Server that you copied it, and it should make any necessary adjustments and boot your image.
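For the curious, the copied-or-moved prompt turns on identity values VMware keeps in the .vmx file itself. Answering that you copied the image prompts VMware to regenerate lines like the following (these are real .vmx keys, but the values are elided here rather than shown as real ones):

```
uuid.location = "..."
uuid.bios = "..."
ethernet0.generatedAddress = "..."
```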

Once it’s booted, you’ll need to make adjustments such as changing the hostname and applying any patches or updates that may have landed since you created the source image.

The script I used follows.

#!/bin/bash

SOURCE_DIR="vm-template"
SOURCE_NAME="vm-template"

DEST_NAME="$1"

if [ -z "$DEST_NAME" ]; then
    echo
    echo "Please specify a VM name as the sole argument"
    echo
    exit 1
fi

if [ -e "$DEST_NAME" ]; then
    echo
    echo "$DEST_NAME already exists. Please specify another."
    echo
    exit 2
fi

echo

mkdir "$DEST_NAME"
echo "Copying source files to $DEST_NAME directory"
echo
cp -R "$SOURCE_DIR"/* "$DEST_NAME"/

cd "$DEST_NAME" || exit 3

# Rename files like vm-template.vmx to use the new name
for file in "$SOURCE_NAME"*; do
    mv "$file" "${file/$SOURCE_NAME/$DEST_NAME}"
done

# Update references to the old names inside the .vmx file
perl -pi -e "s/\Q$SOURCE_NAME\E/$DEST_NAME/g" *.vmx
rm -rf *.lck

Happy B.C. Day

It turns out that I’m in Victoria, British Columbia for work this week. When I planned the trip (at the last minute), I didn’t know that Monday was a Canadian holiday. As one of my American coworkers is here as well, it’s really not that big a deal; we’ll work in peace without all the usual water cooler chatter about moose and that most riveting sport curling (ok, so hockey is probably the more obvious sport choice, but curling dominated the TV when I was last in town).

I learned today that B.C. Day isn’t just a laze-around-the-house holiday but is one that Victoria, at least, does up right. Tonight, the symphony was playing from a barge in the harbor, and tomorrow, Vancouver resident (or native?) Sarah McLachlan is the headliner for a free concert. I was tired this evening (still on Eastern time) but had thought about walking down to the harbor to hear the symphony. When I read online that they traditionally play the 1812 Overture as part of the event, the deal was sealed (it’s a rousing tune that always makes my barnacled old ticker stir a little). So walk down there I did, and I really enjoyed it.

It’s just a block from my hotel down to the harbor area. If you follow the road around the perimeter of the harbor, you pass The Empress, a venerable old (I think very old) hotel with walls covered in something green (ivy or something of that ilk). I think a coworker told me during my last visit that the hotel is slowly sinking into the ground. Then you keep walking around until you get to the Parliament building, which is hard to miss because it is outlined in white lights (like some houses at Christmas). The barge or whatever the symphony was arranged on was basically a straight shot out the front door of the Parliament. There were people lining the streets, sitting on the harbor shore, and filling the lawns of those two neat old buildings.

By the time I got down there, it was starting to get a little dusky, and it was neat to watch the tangle of boat masts swaying in the breeze (I won’t be so fanciful as to suggest that they were dancing to the music, though for a moment I was tempted). The weird thing is that as I approached, the song I heard was “Home on the Range,” and it was followed by “Clementine” and then by a song I recognized and that I heard them announce with “Appalachian” in the title. These all seemed distinctly American to me, so I amused myself for a minute thinking about what a nice welcome party Victoria had thrown me.

I wandered around listening to the music and watching people and generally enjoying the atmosphere. People were dancing to the music and smiling unself-consciously and really having fun, and it was fun to be part of it.

The rendition of the 1812 Overture wasn’t the best I’ve ever heard, but then half the sound was no doubt carried away by the 100 yards between the symphony and me, so maybe I’m not being fair. Toward the end of the song, when cannon fire is appropriate, fireworks were shot off in the harbor, and the pealing of bells could be heard from a bell tower sort of between The Empress and the Parliament, and that was a neat addition.

As the concert wrapped up, people started heading back toward downtown (where my hotel is), and I floated with them. A drum and bagpipe group (complete with kilts and big furry hats) was marching in the street and started playing Amazing Grace as I walked away. Just a couple of blocks down from my hotel on the main drag, a percussion group had set up and was playing some really neat, lively stuff that had more people dancing in the street.

It was a neat night, well worth venturing out in spite of my tiredness, and I can’t help hoping that some of tomorrow’s festivities are evening ones so that I can attend.

Stage

For a long time at my day job, one of our big web site issues has been the staging of database-driven content. Particularly if you’re editing Drupal pages that have a lot of markup in them, publishing a node can be sort of scary, as it goes live instantly with any bugs you’ve introduced. In theory, Drupal’s preview feature can be used to view your changes before you commit to them, but this too is scary, as the content isn’t rendered exactly as it will be once published. Further, using vanilla Drupal with its preview function to stage content requires that you roll out changes one by one. If you want to group changes for a mass rollout, the best you can do is wrap your changes in HTML comments and uncomment them one by one during deployment, hoping you don’t fat-finger anything in the process. I’ve always thought this would be a pretty difficult problem to solve, but yesterday, I came up with what feels like a satisfactory method for staging content.

The new stage module addresses both safety-netted staging of individual content and management of change sets.

It works by tapping into Drupal’s revision system, which already allows you to track changes to content over time and to revert to older content. For specified types of content, any additions or edits are published using the normal Drupal workflow, but on publish, the revision number is pinned at its last blessed point. You can edit or add any number of documents, and they all remain pinned at their pre-edit revision until you roll the whole batch of changes forward. When you roll a batch forward, all the revision numbers are brought to their most recent and pinned there until the next deployment. In the administration section, you identify staging and production servers. If you view an affected node from one of the specified staging hosts, you see the latest copy; if you view it from a production host, you see the pinned version.
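One way to picture the pinning is with plain files and a symlink: each revision is a file, and “production” always reads through a pinned pointer while edits pile up alongside it. This is only a loose analogy with made-up names, not how the module is implemented internally (it pins Drupal revision IDs, not files):

```shell
# Each edit adds a new revision file; "current" stays pinned until
# a change set is rolled forward.
mkdir -p node-42
echo "blessed copy" > node-42/rev-1
ln -s rev-1 node-42/current         # production serves the pinned revision

echo "staged edit" > node-42/rev-2  # an edit lands as a new revision
cat node-42/current                 # still prints "blessed copy"

ln -sfn rev-2 node-42/current       # roll the change set forward
cat node-42/current                 # now prints "staged edit"
```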

This workflow is ideal for environments in which fairly frequent milestones are deployed. Because of Drupal’s handy dandy revision system, you can compare versions of the content across pushes to see what’s changed.

The module is hot off the presses this morning and so is probably still buggy and feature-poor, but it’s a start.

Flock Eco Edition

Word seems to be getting out that we’re releasing an eco edition of the Flock browser for Earth Day. I haven’t tried it myself yet (we’re doing final QA on the build to make sure it’s in good enough shape to release), but I do know that it comes with all sorts of green-related links and feeds built in (it’s not clear to me whether these will be dumped in along with your existing ones if you’re already a Flock user; back up your profile first just in case) and that it has a green theme (complete with a recycle button in place of the reload button, which is kind of nifty).

Flock makes money when people use the search widgets built into the browser to search through Yahoo, and we’re opting to donate 10% of the money we make through this edition of the browser to some green cause (to be determined later by user voting). It kind of makes me think of the free rice game: Play a fun little game and give rice to starving people just by playing. Keep your search engine set to Yahoo and use our product to actually do your searching and save trees at the same time. Who knows? It could be your search for Paris Hilton that enables an ecologist to rescue a baby panda from the clutches of a poacher bent on selling its organs to a far eastern natural medicine dealer.

Travel Sounds

This is the sound of me trying to swipe the wrong part of my virgin passport through the bar code reader (to be fair, there were two bar codes).

This is the sound of me negotiating the single-row seat with a couple split across the aisle of my small plane out of Knoxville.

This is the sound of me sprinting a half mile through the Houston airport to try to make a connection I’m sure I’ve missed (luckily, it was delayed by a few minutes so I made it).

I make no sound on the flight from Houston to Seattle because I’m wedged in the middle seat and have to sit upright and still as a statue for 5 hours to keep from bothering my neighboring passengers.

At first, there is no sound at midnight in Seattle. Then there’s a periodic annoying cell phone ring. Then the sound of a janitor unfurling garbage bags. A gaggle of would-be passengers cheers when their tardy plane arrives, and they commiserate good-naturedly when they learn there’s icy fog at their destination and they may have to turn around and come back when they finally get there. Now I’m nearly alone in the airport. Two gate workers talk about a new boyfriend, and then they leave, and then I am alone, the shops long closed, my stomach gurgling. This is the sound of me crinkling open some crackers and trail mix, slitting open a vacuum-sealed spread of little beef sausages, peeling back the foil lid of a tub of parmesan cheese spread. These I got on the first leg of my trip (Continental’s pretzel upgrade, I thought, though the bounty never reappeared on later legs) and thought to save for the long, shopless night in Seattle. This is the sound of my reaction to the cheese. The other things were ok. This is the sound of my finding a bench to lie down on and rolling my jacket up under my neck and looping my leg through my backpack’s straps. This is the sound of me sitting up to read instead. And finally, the sound of the nothingness of a nap.

Interrupted by the pock-pock-pock of sudden herds of flight attendants going staccato to their early gates. This is the sound of an empty airport bathroom, and the quiet of another little nap. This is the sound of more pock-pock-pocking flight attendants, and then some laughter, pairs and trios of people beginning to stream into my terminal. Morning has broken.

This is the sound of the cappuccino machines at Starbucks and of my ordering a chocolate chip muffin. This is the sound of a Mt. Dew I’ve paid to clunk from its machine, another bribe to the caffeine gods so that they may keep my eyes wide and my brainwave somewhere north of flat for the workday that starts in 4 hours.

Google Docs

I’m usually pretty leery of using online services that I don’t administer for things that matter to me. For example, I’ve resisted a number of times using Google’s calendar for work purposes because there’s potentially sensitive information being posted to the calendar. So not only do I not have control over leaks of the data, but I don’t have control over backups, uptime of the service, etc., and this seems a lot of liability for something I need to make sure I’ve got access to. (Honestly, though, I think the smart folk over at Google are probably generally more competent than I am to guarantee uptime, backups, etc. — comparative benefits packages would suggest as much, at least.)

I’m very satisfied with one aspect of Google’s online service, however, and I’m consistently able to put aside my paranoia to use Google Docs for collaboration. Now I’d never store an important sensitive prose/text document there, but for planning server maintenance, the spreadsheet application is hard to beat. You share a document with everybody who’s involved, and everybody can view and edit the document at the same time. This past weekend, I was tasked with taking another shot at setting up replication between some mysql servers. We’ve set this up in the past but have lost confidence in the validity of the replication. So a coworker and I made another go of it this weekend. In preparation, I made a punch list of our steps, from putting up downtime pages and blocking access to the database at the firewall to pasting in commands for dumping data and resetting meta-data. I was able to color-code the steps by server so that it was easy to tell at a glance on what hardware to perform a step. And then as we went through the steps, we could update columns describing who performed a step and when. Of course, we’re coordinating this in a chat window as we’re doing the work, but it’s neat to watch the spreadsheet being updated interactively as we go, and this method provides a really simple, nice way to collaborate and keep a record of the process. Since the data’s not terribly sensitive (provided you don’t put passwords in), hosting it elsewhere doesn’t give me the heebie jeebies, and it’s nice to have a centralized repository of past maintenance events to build on for future maintenance. If there were a version you could download and install on your own hardware, I’d do it in a heartbeat and even use the apps for sensitive data, but then how would Google watch your every move and deliver search results based on the documents you create?

Blogged with the Flock Browser

Flock

For nearly three years now, I’ve worked for a company called Flock. For nearly three years, we’ve been working toward releasing a 1.0 version of our product. And yesterday, we finally did it, amid much less fanfare than I might have expected (not even a company blog post). Starting as far back as version 0.5 just under two years ago, I’ve been using Flock as my primary web browser (that’s what we make, a web browser built on the same platform that drives Firefox), so I’ve been around to see all the changes the product has gone through.

Our first public beta was released to much hype with subsequent fizzle. It had a neat skin, a photo viewer/uploader, a rudimentary blog authoring tool, and something we called the shelf, and that was it, besides the basic browser functions. Although we had many early enthusiasts (some of whom are still with us), reactions tended to be along the lines of “this is what the hubbub is all about?”

In June 2006, we released version 0.7 of the browser and saw lots of downloads and a lot of press (I worked 20-hour days for a week to keep the new web site from dying under the strain of our traffic). We were thinking at the time that we’d have a 1.0 version by the end of the year, but change was in the air, and after some executive turnover, the end of the year had come and we didn’t have a 1.0. In the first couple of months of this year, I feel like we really hit our stride and started executing. We pushed a 0.9 version with subsequent updates that got tolerable reviews, and our 1.0 beta releases over the past few weeks have been met with the customary skepticism, but for the first time, a lot of that skepticism is beginning to turn over. People are posting that though they found our product either not compelling or too buggy in the past, they’re loving it now. And plenty of newcomers are saying that they’re addicted.

I’m going to do a little sidebar here on the social web. I’ve always been pretty cold to it. What need do I have to send to Twitter every half hour an update about what I’m doing, or to read in real-time that my social-web-addicted buddies are going out for coffee or sitting through a dull meeting? Do I really want to read another “20 Questions” type post on MySpace? Basically, I don’t often have time or the compulsion to fool around on social networking sites. I spend my day working on the computer and so don’t typically like to spend much time playing on it. A year or so ago, I signed up with MySpace and Facebook basically because my work compelled me to. It was another way for Flock employees to consume our own dogfood, so to speak, and to network with users of these sites who were interested in Flock. But there wasn’t much personal value to me in signing up on these sites. I had a profile but I didn’t use the sites with any regularity.

The latest version of Flock has changed this because it brings the social web to me. The nifty services sidebar notifies me when I have new messages or pokes in Facebook, and it lets me drag content from the web to friends’ avatars to share it with them. I can find individual friends within my network more easily than by using Facebook itself because I can type part of a name in a textbox embedded in my browser to filter my friend list. I can see updated statuses easily, and an icon lights up for friends who have uploaded new media. When I click a person’s media icon, a media bar appears and is populated with thumbnails of their media that I can scan at a glance, clicking through to actually view only the things that interest me. Probably the best thing is that Flock tells me when there are updates so that I engage only when I have a good reason to rather than having to remember and bother to visit Facebook to look for updates. Since I’ve been using Flock 1.0, I’ve been engaging with people in my network, sending messages I wouldn’t have sent and viewing photos I wouldn’t have bothered to view. Flock 1.0 for me is like the Reader’s Digest of the social web. I’d never go out of my way to read a full-length bio of Meredith Baxter-Birney, but if I’m sitting on the can and have read all the jokes in my Reader’s Digest, I might thumb through the RD condensed interview with her, and I might even enjoy it a little.

That’s the main thing that differentiates Flock 1.0 from previous versions for me. I’ve long been a fan of the built-in feed aggregation, and it was Flock’s Flickr uploader (which also works with Piczo, Photobucket, and I believe Facebook) that prompted me a year ago to buy a Flickr Pro account. It previously hadn’t been worthwhile because, as a Linux user, I had no painless way of uploading photos in bulk. Flock also has built-in del.icio.us integration, the aforementioned shelf (now called the web clipboard, basically a little drag/drop area that lets you store dragged items for later use in blog posts), the blog editor, and all the goodness that comes with Firefox 2.0’s underlying engine.

I’m an employee of the company, of course, and so I have a vested interest in our success. But I really really do like the product and would use it for the built-in feed reader even if I weren’t an employee. (I’m not only the president of the hair club for men…) I suspect that there are plenty of people for whom Flock provides no benefit that Firefox doesn’t. If you don’t upload photos or read news feeds or belong to social networks, Flock’s probably not for you unless you just think it looks pretty. I wouldn’t necessarily recommend it for my dad, but I probably would for my sister and most of my friends. If you do do any of those things, why not give Flock a shot and let me (or our talented support staff) know what you think?

Counting leaves

I forget how it came up, but M was telling me the other day that she was trying to explain to an inquisitive neighbor what it is I do for a living. She knows I do computer stuff and that it’s most often web-related or system-admin-related, but these are still pretty amorphous things to somebody who doesn’t actually perform the tasks they entail. While raking leaves today, I was thinking about how I might have answered the question, which is a hard one for me to answer in a way that would be very meaningful to non-developers.

In a nutshell, I call myself a web and analytics programmer, though I devote a lot of time to systems administration as well. The web part is fairly easy to explain. If you look at my company’s web site, you’re looking at my work. I don’t make the pretty pictures that compose the web site, but I take care of the parts that make it behave as it does, from sending emails to letting you post to the forums to displaying various types of content. I’m like the mechanic for the web site.

The analytics part I think can be a little harder to capture. At a very high level, I help facilitate the collection of statistics about our product and our web sites. At a lower level, I try to help coalesce these bits of data into meaningful, actionable numbers. For example, if we know that we have X users and Y monetizable actions performed in the product daily, then we can track Y divided by X on a daily basis and watch the curve to see what kind of money we’re making per user per day on average. If a given monetizable action begins to trend flat or downward, we might consider trying to make it easier to use the feature so that we make more money off of it.

The thing I’ve learned over the last year or so is that as you get more and more data, it gets really hard to do anything useful with it on demand. Imagine that each day, 100,000 users’ products phone home to check for a product update (I’m just making that number up). You know then that you have 100,000 users per day. If you want to track this over time, it only takes 10 days before you’ve got a million pieces of data to try to extract something meaningful out of. If you’re tracking more than one piece of data per user, your data volume increases at an alarming rate as your user base grows. The more data you have, typically the longer it takes to cull through it. And yet you have executives trying to make decisions based on this data who don’t want to sit and wait a long time for reports to run. The trick is to aggregate the data as it comes in, and as I was raking leaves this morning, I came up with what I think is a useful way of explaining how scale affects the ability to report and how aggregation helps. It’s easy to accept propositions about scale and aggregation abstractly, but concrete examples are often useful.

So imagine that you’re tasked with counting leaves. Further, imagine that on any given day, you might be tasked with reporting how many leaves there had been on some past day. Or more specifically, how many red leaves vs. yellow vs. orange. If you recount every time somebody asks you, it’ll take more time than is reasonable. The first step naturally would be to group your leaves by day (grant that this is physically possible). So on Monday, you count all the leaves and put them in a pile with a sign stuck in the ground that says “Monday: 45,031 leaves.” On Tuesday, you do the same for any other leaves that have fallen, and so on. On Friday, if somebody wants to know how many leaves you raked on Monday, you just look at the sign and tell them rather than re-counting. But what about leaf color? Well, you do the same thing, but you make a Monday pile for red leaves, a Monday pile for yellow, and a Monday pile for orange, each with a sign noting how many leaves of each color for that day. Then you add the sums and post a sign with the total for all colors for the day. If you do this as you go, then you can very quickly get back to the counts for any given day and report without having to recount. The general idea is that it’s much easier to add sums than it is to recount. The tricky part is defining in advance what sorts of information you want to know about your leaves before you ever do the counting; else you have to recount everything for all time, sorting into different piles to get counts per organizational criterion.
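The leaf piles translate almost directly into a toy shell pipeline. The raw events below are one line per leaf (“day color”); a single aggregation pass produces a summary that answers questions without recounting. The file names and counts are made up for the illustration:

```shell
# Raw data: one line per leaf, recorded as it "falls"
cat > leaves.txt <<'EOF'
mon red
mon red
mon yellow
tue orange
tue red
EOF

# Aggregate once into per-day, per-color counts (the signs in the piles)
sort leaves.txt | uniq -c | awk '{print $2, $3, $1}' > summary.txt

# "How many red leaves on Monday?" is now a lookup, not a recount
awk '$1 == "mon" && $2 == "red" {print $3}' summary.txt   # prints 2
```

The point of the sketch is the shape of the work: the expensive scan over raw data happens once, as the data arrives, and every later question reads the small summary instead.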

Being a sysadmin

My sleep is seldom affected by being one of a few people at my company who spends part of his time doing system administration, but this week has been a sure exception. We moved our whole public server infrastructure to a new section of our data center (complete with new IP addresses and routing), implemented load balancing of two separate clusters of web front-end machines, migrated two database servers to new hardware, and set up database replication for our web-facing databases. And we did it in sort of a last-minute, pre-product-launch scramble with what shoestring planning we could cobble together, while working on other high-priority projects and with very limited down time and, as far as I can tell, very little in the way of experience among our staff with implementing any of these things in a production environment. I’m not sure it could have gone more smoothly had we planned it for three months. It’s inexplicable, really. Of course, helping to make all this happen necessitated my putting in long hours over the weekend and waking up at times like 1:00 a.m. or 4:30 a.m. before or after an otherwise full workday to minimize the impact of down time. We coordinated this with sysadmins in Germany, California, and Tennessee and a data center in Texas. With my dad coming into town this weekend and a pumpkin-carving planned for tonight, I aim to take off around lunchtime (having started work at 4:30 this morning after staying up late to watch the Red Sox take the second game of the World Series) unless somebody threatens to fire me for doing so.

Linux in the 21st Century

For years now, I’ve been an avid Linux user. I (half) joke about how crummy Windows is, and I hate when I have to support Windows, though I’m really not as much of a Linux zealot as you might think. I have to confess that there’s a part of me that likes knowing how to do arcane things that lots of other people don’t know how to do. See all that text scrolling by in my simple terminal window? That’s me installing software, bucko. No graphical installers with smiling paperclips for me. I really do like understanding how my system works (more or less), being able to look under the hood to troubleshoot things. I like not having to understand how a registry works in order to tweak software (though I do have to know how to edit a text configuration file, which might be as scary to others as a registry is to me). But some of my old-school willingness to dispense with usability in favor of a dumb sense of pride and configuration simplicity is wearing off. More and more, I’m finding that there are tools it’d be nice to have that aren’t best implemented in a terminal application. Sure, I could write a program to read a text file I store meeting requests in and send me an email when I’m about to have a meeting, but that takes work and seems not terribly reliable. More and more, I’m looking for tools to handle these sorts of tasks for me, and I’m finding that I like them. I’m emerging from my self-imposed prison of command line solutions and testing out tools that just might help me work like a normal human being, and with some surprisingly good results.

One such tool is Korganizer, the KDE desktop manager’s calendar and organizer tool. In recent months, I’ve been required to attend many more meetings than in the past, and trying to keep them all straight has been a pain. I had tried using Mozilla’s Sunbird calendar program at various times in the past, and it’s a fine piece of software, but it clutters up my workspace. In addition to my mail window, my browser, my irc client, and my tabbed terminal window, I also had to have Sunbird running, and it just irritated me. So I recently tried Korganizer, which it turns out will hide in your system tray and pop up alerts reliably. I’ve been using it for a couple of months now and really have no complaints. It’s a little sluggish on my system, but not so bad that it keeps me from using it. I can tolerate a little UI lag when adding events if the trade-off is reliable notification of upcoming events, the ability to suspend or dismiss events, reasonable handling of recurrence, and a view of my day or week (or month) that lets me see at a glance what’s on my schedule. And Korganizer has all of these things. It also handles todo lists and journals, which I guess are like meeting minutes. I started using todo lists but found that having to open the app to see them made them less useful. I haven’t played with journaling. There are a bunch of buttons at the top that I haven’t done much with, though I’m sure they’re useful. The system tray utility seems to use up no appreciable resources, and that’s a big win on a system that runs dev mysql and apache servers in addition to all my desktop software. I’m sure there are things that Korganizer could do a lot better (I wish I could see our executive calendars, kept on a remote groupware server; as it is, I’m an island), but it beats holy hell out of hacking together something using text files and output from the “cal” command, and it has become a must-have tool for me.

Next up is Komodo Edit. I’ve taken comfort in the simplicity of the command line and the non-GUI text editor since I became used to editing files in pico and reading mail using pine back in college. When I began doing a lot of programming and learned a lot of the cool things you can do using the vi editor, I couldn’t imagine I’d ever go back to an IDE that would require mouse moves and menu navigation. My fingers are hard-wired to do vi commands now. I can do text replacement in my sleep (want to add a tab to the beginning of lines 23 – 47? type: “<ESC> :23,47s/^/t/”; oops, wanna undo it? just type “u”; then “:wq” to save and close), and I have trouble editing in any other way. One of my few beefs with vi has always been that it’s hard to do operations that span more than one vertical span of screen real estate. To delete a line range, you have to count lines or look for line numbers and then delete or cut. If you’re trying to move a hundred lines around, this can be a minor pain. A few years ago, I tried out ActiveState’s Komodo IDE. It’s built on top of Mozilla’s code and so is a cross-platform solution. At the time, it was very sluggish and didn’t offer much that interested me. Sure, there was code completion and syntax highlighting, but I can get the latter in vi, and the former almost always winds up irritating me more than it helps me. Plus it cost money to use the non-evaluation version. Recently, ActiveState and Komodo have been in Mozilla news. They’re starting a project to open up parts of their source, it turns out. In reading about this, I learned about Komodo Edit, which is the light-weight version of their pay-to-play editor. It’s free and pretty responsive (probably because it’s doing a lot less junk behind the scenes). Most importantly for my use, it has vi key-bindings. 
So I can fire up Komodo Edit, avail myself of whatever scrolling and selection capabilities are useful to me, and still issue the weird “:23,47s/^/\t/” sort of commands my fingers are so used to. What’s more, I can define projects and view select files in a sidebar, so I do a lot less typing to navigate the file system when working on projects that require me to edit a number of files. I’ve also discovered that the find-and-replace feature helps out when there’s some regex I can’t quite work out by hand (e.g., when I want to replace text with newlines). I probably use a tiny subset of Komodo Edit’s features, but the ones I use are pretty useful. If I’m doing one-off edits or staying in one file and toggling to the command line to test (e.g., when working on a perl script to parse a log and display summary info), I still do better at the command line, but Komodo Edit is fast becoming not a “must have” but a solid “nice to have when I want it” tool.
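Incidentally, the range-substitution trick my fingers know from vi works non-interactively too. Here’s a rough shell sketch of the tab-prefix edit from above, using GNU sed (which understands \t in a replacement, unlike strictly POSIX sed); the file names here are just made up for the example:

```shell
# Build a 50-line sample file, one number per line
seq 1 50 > sample.txt

# Prefix lines 23-47 with a tab: the batch equivalent of
# vim's ":23,47s/^/\t/" (GNU sed accepts \t in the replacement)
sed '23,47s/^/\t/' sample.txt > tabbed.txt
```

Handy when you want the same edit applied across a pile of files without ever opening an editor.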

My latest interest is in launchers. I never really caught on to Mac OS X’s Quicksilver launcher. It’s not so much that I didn’t get it as that I didn’t see it as a killer feature for most Mac users, whom I think of as people who like to draw pretty pictures more than as people who want to remember the abstruse key combinations needed to make a launcher behave in useful ways. But as I find myself more and more trying to get back to documents or applications buried in the file system or in menus, I keep wishing I could just type a couple of keys to pull them up.

KDE’s Katapult looks very slick and promising, but it’s geared toward KDE applications and interactions, and I can’t seem to pull myself away from the Gnome desktop. Although I’ve read that Katapult is easy to extend, documentation seems poor at best, and I suspect you have to drink the KDE Koolaid and know a bit about KDE’s frameworks to make much headway. Gnome has an app named gnome-launch-box that is sort of like Katapult, but it’s very ugly. Although you can run it without its window initially on top of other apps, I can’t figure out how to then summon it (in Katapult, you press CTRL-Space and the slick interface appears instantly). It’s pretty responsive at finding and launching folders and applications, it handles multiple matches (e.g., a list pops up displaying both Korganizer and Komodo Edit if you type “ko”), and it seems to be wired for extensibility, but by the developers’ own admission, it’s just not ready for prime time.

Ubuntu ships with a tool called Deskbar that is a sort of launcher, but it hasn’t worked very well for me so far. It’s hard to predict what results it will return and in what order, and though it appears to be fairly extensible, a plugin I wrote for it (actually, I just modified the bugzilla plugin to point to my bugzilla install) is quirky at best.
So while I’m on the hunt for a good launcher, none of the options I’ve found so far quite cuts the mustard.

Of course I use Flock and Thunderbird. In the next few weeks, Flock will be making a big step toward its original vision for the browser as a social tool. Thunderbird is pretty low-frills but has served my email needs very well for roughly five years now. But these apps are old news for me, so they don’t really fit into this post, which outlines a recent foray into a broader set of GUI apps. In the same category are xchat and OpenOffice.org.

So, there you have it. Back into my dork cave I go. All this time out in the land of the first-class user has instilled in me a craving for a darkened room and the glow of a terminal window flickering up at me in a chunky Courier font.

:wq