Tuesday, September 11, 2007

iTunes over NFS: A Failed Experiment

I recently wrote about my home network and music library issues. To review, I have a headless Mac mini that lives under my bed. It holds our 20GB music library and serves it to the Roku and various Macs around the house (via shared iTunes libraries). It is on 24/7, so it is always ready to serve music. This solution mostly works, but interfacing with the Mac mini (via Chicken of the VNC) is a pain since it is just a headless server. For ripping CDs and syncing an iPod it is not a workable solution.

As a workaround, I set up an NFS mount, as described here, between the iTunes Music folder on the laptops and the iTunes Music directory on the Mac mini. My aim was to solve the CD ripping and iPod syncing problem by fooling iTunes on my laptop into thinking it was dealing with a local file system when it was really accessing the server's Music directory. For a while, this harebrained solution actually seemed to work. I could play music with no apparent lag. Syncing the iPod was slow, but tolerable. Things quickly went downhill, however. To make this solution work, I had to muck with the Music folder's file permissions, which iTunes and the Mac mini did not seem to like. (Maybe the NFS mount was set up wrong with respect to permissions, but ultimately this was the least of my problems.) Strange temp and "Damaged" files started showing up in the Music directory in large numbers. The setup would not function reliably, with the iTunes library dropping out and the file permissions confusing iTunes.

In retrospect, my strategy was doomed to fail. Without realizing it, I was expecting the iTunes file system to function as a transactional music server able to handle several iTunes clients at once. iTunes is not a music server in that sense. There are perhaps other workarounds, such as making sure only one iTunes client accesses the library at a time, but that is getting too kludgy. (I know iTunes can see other iTunes libraries on the same network via iTunes sharing, but, again, I want to rip CDs and sync an iPod to a remote library.)

In the end, I copied all 20GB of music over to one of our laptops. We now use that computer for syncing iPods, and unfortunately I am stuck maintaining two music libraries at once: one for serving music, and another for syncing the iPod and ripping CDs. It was a frustrating experience, but the last thing I did was listen to some Brian Eno and Nick Drake, which reminded me how much I like my music collection and put me in a good mood again.

On a related note, Apple recently released a WiFi iPod. If this device could connect to Internet radio (which, as far as I can tell, it can't), it would be the ultimate solution for me, coupled with something like this.

Saturday, September 1, 2007

OSS: You Get What You Pay For

Like many developers, I have mixed feelings about open source software (OSS). It is a nice idea to leverage (and even, in the spirit of OSS, contribute back to) other people's work and harness their code to achieve your goals. This strategy has led to some phenomenal success stories. Several companies that recently experienced meteoric success (such as Google and YouTube) use open source software to run their core business processes. Evidently, some flavors of OSS work in real economic terms.

This fact, however, is somewhat ironic because a lot of open source software just plain sucks. This paradox was brought to my attention recently when I was migrating from Tomcat 5.5 to 6.

For one thing, the directory structure was reorganized, so I did not know where the external jars (e.g. mysql and log4j) should live. In the end, I figured out where to place the jars as described here. This portion of the upgrade was not too painful, though the logic behind these directory structure changes is unclear to me.

Incorporating log4j was the most unpleasant experience. I don't know about you, but when I read crummy documentation with spelling and grammar mistakes (e.g. "van be very", "teh Tomcat") and ambiguous directions, it does not give me much confidence about the internals of the application. With that thought in mind, I downloaded the Tomcat source code. It took me less than a minute to find code that looked like this:

try {
    // Code, code, code
} catch (Throwable t) {
    return (null);
}
This is a good example of how not to write Java. To see why, consult Chapter 8 of "Effective Java" (Items 39-47). In addition, the Java Language Specification (JLS 11.2.4) notes that catching Errors (Throwable is the parent of both Exception and Error) is not recommended.

Moreover, it is conventional Java wisdom not to swallow exceptions. Technically, the exception here was not totally swallowed, but it certainly lost much of its information. The details of what exactly went wrong, and where, are lost forever. All the caller receives is a measly null. (And what's up with those superfluous parentheses around the null?)
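To make the problem concrete, here is a small sketch (the class and method names are my own invention, not Tomcat code) showing what a catch-all Throwable handler does to the caller:

```java
public class SwallowDemo {

    // Bad pattern: catching Throwable catches Exceptions AND Errors,
    // and returning null discards everything about the failure.
    static Integer parse(String s) {
        try {
            return Integer.valueOf(s);
        } catch (Throwable t) {   // NumberFormatException, OutOfMemoryError, ... all land here
            return null;          // the caller cannot tell what went wrong, or where
        }
    }

    public static void main(String[] args) {
        System.out.println(parse("42"));     // prints 42
        System.out.println(parse("oops"));   // prints null -- the NumberFormatException
                                             // and its stack trace are gone forever
    }
}
```

A bad argument, a bug inside the method, and a serious Error like StackOverflowError all come back as the same indistinguishable null.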

With respect to proper exception handling, I would add: catch the most specific exception possible and try to handle the problem intelligently. Clean up resources, log errors, notify the user in a friendly way that something went wrong, and do whatever is appropriate to your application logic when something unexpected happens.

If you can't recover from the exception (which is often the case), rethrow it, or better yet, throw a new (possibly runtime) exception appropriate to the level of abstraction at that point in the code (EJ Item 43). Only throw checked exceptions if the caller can actually recover from the problem (EJ Item 41).
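A minimal sketch of the pattern I have in mind (the ConfigStore class and its ConfigException are hypothetical names, chosen for illustration): catch the specific low-level exception, clean up in a finally block, and rethrow as an unchecked exception that makes sense at this layer, preserving the cause.

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

public class ConfigStore {

    // Unchecked exception at the abstraction level of this class;
    // callers are not expected to recover from a missing config file.
    public static class ConfigException extends RuntimeException {
        public ConfigException(String message, Throwable cause) {
            super(message, cause);  // keep the original stack trace and details
        }
    }

    public Properties load(String path) {
        FileInputStream in = null;
        try {
            in = new FileInputStream(path);
            Properties props = new Properties();
            props.load(in);
            return props;
        } catch (IOException e) {
            // Catch the most specific exception, and rethrow at the right
            // level of abstraction -- never "return null".
            throw new ConfigException("Could not load config: " + path, e);
        } finally {
            // Clean up resources whether or not loading succeeded.
            if (in != null) {
                try { in.close(); } catch (IOException ignored) { }
            }
        }
    }
}
```

Unlike the Tomcat snippet above, a caller of load() that hits a missing file gets a ConfigException whose getCause() still carries the full IOException, so nothing about the failure is lost.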

So why is Tomcat so popular in spite of poor coding standards? Because, to my great amazement, it works! I have been using it for some time, relatively problem free. Granted, I am not pushing it to its limits. If I were, maybe I would start seeing unusual problems -- probably obfuscated by the exception handling mechanism.

The only explanation I can think of for this sad state of affairs is that, with so large a user base, many of the bugs have been eliminated through brute-force trial and error. I find trial-and-error coding particularly odious, but I guess with enough testers out there, OSS developers can create a relatively stable application despite terrible coding.

But if the application is ported to a new environment or something unusual goes wrong, good luck trying to find the source of any problems. Sometimes you really do get what you pay for.