Switching from Greyhole to ZFS file system


I didn’t originally go with ZFS, opting for Greyhole instead. However, I can’t help but want the advantages ZFS offers. On Ubuntu, installation is dead simple using the ZFS on Linux PPA:

sudo add-apt-repository ppa:zfs-native/stable
sudo apt-get update
sudo apt-get install ubuntu-zfs zfs-auto-snapshot

I am going to create a degraded raidz across three drives. First, I have to transfer any files currently stored on two of those three drives onto the remaining disk. Greyhole handles this easily:

sudo greyhole --going=/mnt/wd2tb/gh
sudo greyhole --going=/mnt/sam15tb/gh
sudo umount /mnt/wd2tb
sudo umount /mnt/sam15tb

Make sure you have enough room and that Greyhole completed both operations without errors.
If so, we can proceed to build the degraded raidz from those two drives plus a placeholder file. We will later take the placeholder offline, leaving the array degraded, copy over all the files, and then replace the missing “drive” with the real disk that currently holds the data.
This command creates a sparse file with an apparent size of roughly 2 terabytes that occupies only 1G of actual space. My /tmp is in memory (tmpfs), so the file disappears on reboot:

dd if=/dev/zero of=/tmp/false2tb.zfs bs=1G count=1 seek=2000
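To see the sparse-file trick in miniature (a throwaway demo, not part of the migration), create a small file the same way and compare its apparent size with what it actually allocates:

```shell
# Demo only: a file with a 100M apparent size that allocates about 1M.
# Same seek-past-the-end technique as the 2 TB placeholder above.
dd if=/dev/zero of=/tmp/sparse_demo bs=1M count=1 seek=99 2>/dev/null
ls -lh /tmp/sparse_demo   # apparent size: 100M
du -h /tmp/sparse_demo    # allocated space: far less
```

ls reports the file's length, while du reports the blocks actually backing it; the gap between the two is the hole that dd skipped over.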

Second, you shouldn’t reference ZFS drives by their short names (sda, sdb, …), since those can change between boots; use the persistent device IDs instead. I chose to give them friendly names by creating a vdev alias file at /etc/zfs/vdev_id.conf with the following (after saving it, run sudo udevadm trigger so the /dev/disk/by-vdev links are generated):

#     by-vdev
#     name     fully qualified or base name of device link
alias hit2tb    scsi-some_disk_id
alias sea15tb   scsi-other_disk_id
alias wd2tb     scsi-another_disk_id

Then we create the raidz from the placeholder file and the two drives that no longer hold data. Be careful here: zpool create formats the drives automatically, and the -f flag forces it past any warnings.

sudo zpool create datapool raidz wd2tb sea15tb /tmp/false2tb.zfs -f
sudo zpool status
  pool: datapool
 state: ONLINE
  scan: none requested
config:

	NAME                   STATE     READ WRITE CKSUM
	datapool               ONLINE       0     0     0
	  raidz1-0             ONLINE       0     0     0
	    wd2tb              ONLINE       0     0     0
	    sea15tb            ONLINE       0     0     0
	    /tmp/false2tb.zfs  ONLINE       0     0     0

It is made! Now we immediately take the file “drive” offline and check the status:

sudo zpool offline datapool /tmp/false2tb.zfs
sudo zpool status
  pool: datapool
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
	Sufficient replicas exist for the pool to continue functioning in a
	degraded state.
action: Online the device using 'zpool online' or replace the device with
	'zpool replace'.
  scan: none requested
config:

	NAME                   STATE     READ WRITE CKSUM
	datapool               DEGRADED     0     0     0
	  raidz1-0             DEGRADED     0     0     0
	    wd2tb              ONLINE       0     0     0
	    sea15tb            ONLINE       0     0     0
	    /tmp/false2tb.zfs  OFFLINE      0     0     0

I have Advanced Format disks (4 KiB physical sectors), so I want to check that ZFS chose the right ashift=12 setting:

sudo zdb | egrep 'ashift| name'
    name: 'datapool'
            ashift: 12

An ashift=12 value is what I want. If it were 9, ZFS would be using a 512-byte sector size, which hurts write performance on Advanced Format drives. If autodetection gets it wrong, you can force the value at pool creation with zpool create -o ashift=12.
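For reference, ashift is a base-2 exponent: the pool's sector size is 2^ashift bytes, so the two values map to sector sizes like this:

```shell
# sector size = 2^ashift bytes
echo $(( 2 ** 12 ))   # 4096 -> 4 KiB sectors, Advanced Format drives
echo $(( 2 ** 9 ))    # 512  -> legacy 512-byte-sector drives
```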

I want to change some settings and then create some datasets within the pool:

sudo zfs set compression=on datapool
sudo zfs create datapool/ComputerBackups
sudo zfs create datapool/Crashplan
...

The command sudo zfs list will show what was created.
Now we need to copy the files from their current location, which is /mnt/hit2tb/gh/ for me:

sudo rsync -av --exclude='.gh*' --log-file=~/Documents/copytodatapool.log /mnt/hit2tb/gh/ /datapool/

…and then wait a few hours…
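One way to check the copy is to recursively diff the two trees. This is a sketch using throwaway directories; for the real run, substitute /mnt/hit2tb/gh and /datapool, expect it to take a long while since diff reads every byte, and note that the excluded .gh* files will show up as “Only in” lines on the source side:

```shell
# Demo with temp dirs; swap in the real source and destination paths.
SRC=$(mktemp -d); DST=$(mktemp -d)
echo "sample data" > "$SRC/file1"
cp "$SRC/file1" "$DST/file1"
# diff -r exits 0 and prints nothing when the trees are identical.
diff -r "$SRC" "$DST" && echo "trees match"
```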
Once you’ve checked that everything you want has copied over correctly, it is time to replace the placeholder file with the real remaining disk.
First, stop Greyhole. I personally decided not to use it at all after this point, so I also edited greyhole.conf to remove all the storage pool drives.

sudo service greyhole stop
sudo umount /mnt/hit2tb
sudo zpool replace datapool /tmp/false2tb.zfs hit2tb
sudo zpool status
  pool: datapool
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
	continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Sat Mar  9 20:39:10 2013
    91.1M scanned out of 1.73T at 11.4M/s, 44h9m to go
    28.1M resilvered, 0.01% done
config:

	NAME                     STATE     READ WRITE CKSUM
	datapool                 DEGRADED     0     0     0
	  raidz1-0               DEGRADED     0     0     0
	    wd2tb                ONLINE       0     0     0
	    sea15tb              ONLINE       0     0     0
	    replacing-2          OFFLINE      0     0     0
	      /tmp/false2tb.zfs  OFFLINE      0     0     0
	      hit2tb             ONLINE       0     0     0  (resilvering)

… and then wait for it to finish resilvering. Presto, your raidz is in place. Once the resilver completes, running sudo zpool scrub datapool will verify every block against its checksum for extra peace of mind.
Now your data is protected and Greyhole is no longer needed.

sudo apt-get remove greyhole
sudo apt-get autoremove

4 responses to “Switching from Greyhole to ZFS file system”

  1. Henry Armitage

    What were the advantages that finally made you move?

    1. Weston

      It’s been a while, so I’d have to go back and check to have a great answer. Greyhole would be fine if I were only accessing data through Samba shares; it uses Samba to track all changes for duplication purposes. With MythTV, Logitech Squeeze Server, etc. running on the same box, I didn’t want to go through Samba shares for every access to the data. Greyhole is also just a duplication service: if one version is corrupt, it may duplicate that corruption across to the other copies. And there are no snapshots with Greyhole in case a file is deleted or changed accidentally.

      ZFS has the RAID efficiencies with 3+ drives, the snapshots and checksum protection built in to protect against corruption. The only downside is the need to have like sized drives for maximum efficiency and you have to wipe and reformat drives to convert to the ZFS format. I’m glad I did.

      1. Henry Armitage

        The other issue with ZFS, unfortunately a killer one for me, was that you cannot grow a pool. You can make a second (independent) pool alongside it, but growing the original by adding drives isn’t an option (certainly not an easy one). Some folks are talking about using AUFS+SnapRAID (aufs however is expected to be dropped from the linux kernel)…. I’m wondering about using Greyhole just for easy pooling, and SnapRAID for duplication…

        1. Weston

          I agree completely that growing the original pool by adding drives is a killer limitation in some circumstances. I believe this is on the list to implement in ZFS at some point, just probably not soon.

          I set up my pool with the 1.5TB and two 2TB drives, and so my pooled space was 3TB. Since I can replace the 1.5TB with a 2TB (or 3TB) drive and bump up my pool size to 4TB, I was comfortable with that set up. I left the 750GB drive out and use it for livetv recordings which I don’t care about losing. If I want to retain a TV recording, I archive it to the ZFS pool.
          While my video and photo collection continues to grow, I am currently using about half of my pool and don’t expect to fill it for several years. I would like to eventually move to a four drive pool with dual redundancy, but the checksum checking of ZFS makes dual drive failure not as likely as under standard RAID. Plus I backup anything important to crashplan and an external HD, so I’m comfortable with my redundancy in the ZFS pool.

          I also felt that greyhole was another service that I had to make sure was running correctly along with dependencies like Samba. I like that ZFS is independent of Samba or any other separate service.
