Using the Granite Digital Firewire Drive Bay with Red Hat Linux

Granite Digital makes a line of FireVue IDE hot-swap drive bays that allow ordinary IDE drives to be connected to IEEE 1394 (FireWire) ports. We see sustained transfer rates of 35 megabytes per second, which is quite satisfactory for our purposes. FireWire also allows hot-plugging under Linux, which a plain IDE interface does not. (USB is another hot-plug possibility.)

While Granite Digital claims to support only Windows and the Mac, it is quite possible to use the bays with Linux. Standard Linux distributions seem to include FireWire support as part of the SCSI subsystem; at least we found it in Red Hat 9 and SuSE 8.2, even though no FireWire drives were present during the OS installation. We believe any distribution based on the 2.4 kernel is likely to include some support for IEEE 1394 (FireWire) drives. Depending on how thoroughly the kernel has been patched, however, the drives may not be spontaneously "registered" (made available at the hardware level). This was generally the case with our Red Hat 9 installation, although very rarely the drives would "take" - just often enough to be frustrating. The cure is relatively simple.
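If you are unsure whether your kernel has the FireWire stack loaded, lsmod can tell you. The sketch below assumes the 2.4-era linux1394 module names (ieee1394, ohci1394, sbp2); on some kernels these may be built in rather than loaded as modules, and fw_modules is just a name I made up:

```shell
# fw_modules: filter a module listing (normally `lsmod` output) down to
# the FireWire stack: ieee1394 (core), ohci1394 (host controller), and
# sbp2 (the driver that presents FireWire disks as SCSI devices).
fw_modules() {
    awk '$1 == "ieee1394" || $1 == "ohci1394" || $1 == "sbp2" { print $1 }'
}
# Typical use:
#   lsmod | fw_modules
# If sbp2 is absent, `modprobe sbp2` (as root) may bring it in.
```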

After connecting the drive bay to the FireWire port of your Linux box and rebooting the computer with a drive in the bay, you will likely be prompted by Red Hat's Kudzu program to allow the installation of a driver for the new hardware. Allow the installation and proceed with booting.

If you insert a drive and turn the key, you will see a console message:

ieee1394: Selfid completion called outside of bus reset!

In spite of the "bang", this is apparently not an error message. Once booted and logged in as the superuser, you can check which SBP-2/SCSI (IEEE 1394) devices are currently registered ("registration" makes a device available as an entry in /dev):

cat /proc/scsi/scsi

If yours is listed (probably as /dev/sda) then you don't need this page - you can now partition or mount the drive. If it isn't, simply issue the following command (as root):
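For scripting, the presence check can be reduced to a grep. This is only a sketch, under the assumption that the bay is your only SCSI device (fw_registered is an invented name, not a standard utility):

```shell
# fw_registered: succeed if a /proc/scsi/scsi style listing contains at
# least one Direct-Access (disk) entry.  This is only meaningful as a
# FireWire check when the bay is your only SCSI device.
fw_registered() {
    grep -q 'Direct-Access' "${1:-/proc/scsi/scsi}"
}
# Example:
#   fw_registered && echo "drive registered" || echo "not registered"
```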

echo "scsi add-single-device 0 0 0 0" >/proc/scsi/scsi

This should generate a log entry in /var/log/messages, which may complain that the new disk is "not a valid block device". Before physically removing the drive, use the following command to keep Linux from hanging:

echo "scsi remove-single-device 0 0 0 0" >/proc/scsi/scsi
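The two writes can be wrapped in a tiny helper so the add and the remove always stay paired. This is only a sketch: fw_scsi is my name, the "0 0 0 0" arguments (host, channel, id, lun) are correct only for the simple single-drive case, and the SCSI_PROC override exists purely so the function can be exercised without touching the real /proc file:

```shell
# fw_scsi: register ("add") or deregister ("remove") the drive at
# host 0, channel 0, id 0, lun 0 by writing to /proc/scsi/scsi.
# Run as root.  SCSI_PROC can point elsewhere for a dry run.
fw_scsi() {
    target=${SCSI_PROC:-/proc/scsi/scsi}
    case "$1" in
        add)    echo "scsi add-single-device 0 0 0 0"    > "$target" ;;
        remove) echo "scsi remove-single-device 0 0 0 0" > "$target" ;;
        *)      echo "usage: fw_scsi add|remove" >&2; return 1 ;;
    esac
}
# fw_scsi add      # after inserting the drive and turning the key
# fw_scsi remove   # before physically pulling the drive
```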

I learned about these commands from http://linux1394.org/sbp2.html - it was my first time writing into /proc as well. That site offers a shell script (rescan-scsi-bus.sh) that performs the add or remove and figures out the correct arguments, but for some reason it is not part of the RH9 default installation. To use the script, run:

rescan-scsi-bus.sh

after plugging in the drive and turning on the bay. After unmounting a drive and removing it from the bay, issue the following command:

rescan-scsi-bus.sh -r

If you don't logically remove the drive this way, and the OS doesn't handle this (RH9 appears not to), then the system will hang the next time you insert a different drive. As far as I can tell, however, you can reinsert the same drive with no adverse consequences.

If your drive is new, the following commands will make it one big ext3 partition (assuming the hardware address is /dev/sda):

sfdisk /dev/sda
mkfs.ext3 /dev/sda1
mount /dev/sda1 /mnt

If you have any other SCSI devices, or devices using SCSI emulation, the drive won't be /dev/sda, so you need to be a bit careful before doing this. The sfdisk command will ask a lot of questions, but the default answers are all I ever need, because I always want one big partition. ext3 is desirable because its journal avoids a full fsck at boot after an unclean shutdown, which would be very slow on a large drive.
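Before running mkfs, it is worth confirming which sd node the FireWire drive actually received. Here is a rough sketch (the function name is mine, and the mapping of listing order to sda, sdb, ... is a heuristic based on the kernel assigning nodes in registration order, not a guarantee):

```shell
# list_scsi_disks: pair each Direct-Access entry in a /proc/scsi/scsi
# style listing with the /dev/sd? node it should have received
# (heuristic: nodes are handed out in listing order).
list_scsi_disks() {
    awk '/Vendor:/ { vendor = $2; model = $4 }
         /Direct-Access/ {
             printf "/dev/sd%c  %s %s\n", 97 + n, vendor, model; n++
         }' "${1:-/proc/scsi/scsi}"
}
# Usage: list_scsi_disks      # reads /proc/scsi/scsi by default
```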

A very informative message in response to this page was posted to the i1394 mailing list, and is reproduced here

LBA-48 support

We have several Granite Digital drive bays and trays, with WDC 200 GB and Maxtor MaXLine II (300 GB) drives. For several months I was fairly confident that they were working fine, but eventually the large drives started failing e2fsck with hundreds of errors.

For testing, I did a default minimal install of RH9 from the store-bought CD, ran sfdisk to make a single partition on the 300 gigabyte drive in the GD drive bay, and mkfs.ext3 to make a filesystem. Then I copied /usr (less than a gig) onto the drive. This ran rapidly and without error messages. A cursory examination suggested that the copy was accurate, but when I ran e2fsck errors were found.

For a second trial, I ran Red Hat's up2date and then repartitioned, reformatted, etc. At the last step e2fsck again found many errors. For the third and fourth trials, I did a default install of SuSE 8.2 and repartitioned, etc. Again many errors, and they persisted after a SuSE update.

At that point we obtained a Firewire PCI card from Granite Digital and repeated the updated RH tests, with no improvement.

Red Hat uses OHCI 1.1 and SBP2 revision 709; the updated SuSE installation uses SBP2 revision 799.

With the up2date'd RH9 system I also tested other drives mounted in other GD trays, and swapped drive bays and cables as well. The one constant seemed to be that the smaller drives worked, but the 200 and 300 gigabyte drives did not.

I have to conclude that you do not want to go above 137 gigabytes (128 binary gigabytes) with the Linux/GD combination, although drives that size do seem to work in Windows and in Mac OS X (both of which we also tested).
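The 137-gigabyte figure is not arbitrary: it is the capacity addressable with the older 28-bit LBA scheme at 512 bytes per sector, which is exactly the limit LBA-48 was introduced to remove. The arithmetic, as a quick shell check:

```shell
# 28-bit sector addresses, 512 bytes per sector
sectors=$((1 << 28))
bytes=$((sectors * 512))
echo "$bytes bytes"                        # 137438953472
echo "$((bytes / 1000000000)) decimal GB"  # 137
echo "$((bytes >> 30)) binary GiB"         # 128
```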

(Note added June 13, 2004) I have just "upgraded" our system to Red Hat Enterprise Linux 3.0, and find that it has no support whatsoever for FireWire hard drives. We were able to find some information about adding support at Dell, which seems to work. A recently obtained Fedora release does have support similar to RH9's.

Daniel Feenberg
feenberg isat nber dotte org
24 October 2003