Replacing a ZFS RAIDz drive


I have a 3 drive RAIDz pool consisting of two 2TB drives and one 1.5TB drive. The 1.5TB drive has been giving a few errors at times, and so I purchased a replacement 3TB WD Red drive.

The following steps show how to replace a failing drive in a ZFS pool.

1. Get root and check status of pool:

~$ sudo su
# zpool status datapool
  pool: datapool
 state: ONLINE
  scan: resilvered 1.25M in 0h0m with 0 errors on Thu Sep 19 12:31:41 2013
config:

        NAME         STATE     READ WRITE CKSUM
        datapool     ONLINE       0     0     0
          raidz1-0   ONLINE       0     0     0
            wd2tb    ONLINE       0     0     0
            sea15tb  ONLINE       0     0     0
            hit2tb   ONLINE       0     0     0

2. Take the drive that will be removed offline:

# zpool offline datapool sea15tb

3. Check status of pool:

# zpool status datapool
  pool: datapool
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
  scan: resilvered 1.25M in 0h0m with 0 errors on Thu Sep 19 12:31:41 2013
config:

        NAME         STATE     READ WRITE CKSUM
        datapool     DEGRADED     0     0     0
          raidz1-0   DEGRADED     0     0     0
            wd2tb    ONLINE       0     0     0
            sea15tb  OFFLINE      0     0     0
            hit2tb   ONLINE       0     0     0

4. Remove the bad hard drive and install the new one.

My system supports hot-swapping, so this was simple enough even without shutting down.

5. Add the new drive to /etc/zfs/vdev_id.conf so it can be referenced by a friendly name.

# ls /dev/disk/by-id/

Find the ID of your new drive in that listing, then add it to the conf file, replacing 'scsi-some_disk_id' in the command below with your drive's actual ID.

# echo "alias wd3tb_1 scsi-some_disk_id" >> /etc/zfs/vdev_id.conf
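The alias line can be assembled from the by-id name like this; a minimal sketch that keeps the article's placeholder ID. (On most systems you also need to run `udevadm trigger` after editing the conf file so the /dev/disk/by-vdev symlink appears.)

```shell
# Build the vdev_id.conf alias line from the drive's stable by-id name.
# 'scsi-some_disk_id' is a placeholder -- substitute the actual name you
# found under /dev/disk/by-id/.
disk_id="scsi-some_disk_id"
alias_line="alias wd3tb_1 ${disk_id}"
echo "$alias_line"    # alias wd3tb_1 scsi-some_disk_id

# On the real system, append it and refresh udev so the symlink is created:
#   echo "$alias_line" >> /etc/zfs/vdev_id.conf
#   udevadm trigger
```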

6. Have ZFS replace the old drive with the new in the pool.

# zpool replace -o ashift=12 datapool sea15tb wd3tb_1

In my case, the new drive had not been formatted, which resulted in the following warning:

invalid vdev specification
use '-f' to override the following errors:
/dev/disk/by-vdev/wd3tb_1 does not contain an EFI label but it may contain partition
information in the MBR.

6a. Format new drive with GPT

To overcome the error without forcing the override, format the new drive with gdisk to give it a GPT (GUID Partition Table). Replace the 'X' in the command below with the letter that corresponds to your new drive; there are several ways to find it, for example 'fdisk -l' lists all drives and their names.

# gdisk /dev/sdX

Partition table scan:
  MBR: not present
  BSD: not present
  APM: not present
  GPT: not present

Creating new GPT entries.

Command (? for help):

The command 'o' creates a new GPT, and the command 'w' writes it to disk. Make sure you are referencing the correct /dev/sdX, since this deletes all data on the drive.
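If you want to double-check that a GPT was actually written without re-running gdisk: the GPT header lives in sector 1 (byte offset 512) and begins with the ASCII signature "EFI PART". A minimal sketch that demonstrates the check against a scratch image file; on a real disk you would point dd at /dev/sdX instead (the read is harmless, nothing is written):

```shell
# The GPT header starts at byte 512 with the signature "EFI PART".
# Build a tiny scratch image with a fake header to show the check;
# against a real disk, use if=/dev/sdX instead of the scratch file.
img=$(mktemp)
{ dd if=/dev/zero bs=512 count=1 status=none; printf 'EFI PART'; } > "$img"
dd if="$img" bs=1 skip=512 count=8 status=none; echo   # prints: EFI PART
rm -f "$img"
```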

6b. Have ZFS try to replace the drive (same command as 6).

# zpool replace -o ashift=12 datapool sea15tb wd3tb_1

There is no output; the process starts silently.
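A note on the -o ashift=12 flag used above: ashift is the base-2 logarithm of the sector size ZFS assumes for the vdev, so 12 means 4096-byte sectors, matching modern 4K "Advanced Format" drives like the WD Red, while 9 would mean the legacy 512-byte sectors:

```shell
# ashift is log2 of the sector size: the vdev uses 2^ashift-byte sectors.
echo $((2 ** 12))   # 4096 -> ashift=12, 4K Advanced Format drives
echo $((2 ** 9))    # 512  -> ashift=9, legacy 512-byte-sector drives
```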

7. Check replacement status

To check the status, run the zpool status command.

# zpool status datapool
  pool: datapool
 state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Mon Apr 28 17:18:09 2014
        190M scanned out of 2.91T at 120.7M/s, 6h3m to go
        62.2M resilvered, 0.01% done
config:

        NAME             STATE     READ WRITE CKSUM
        datapool         DEGRADED     0     0     0
          raidz1-0       DEGRADED     0     0     0
            wd2tb        ONLINE       0     0     0
            replacing-1  OFFLINE      0     0     0
              sea15tb    OFFLINE      0     0     0
              wd3tb_1    ONLINE       0     0     0  (resilvering)
            hit2tb       ONLINE       0     0     0

errors: No known data errors
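If you want to watch the resilver from a script, the percentage can be grepped out of the status output. A sketch that parses the captured sample above; on a live system you would pipe `zpool status datapool` in instead, and note that the exact wording of the scan line varies between ZFS versions, so treat the pattern as an assumption:

```shell
# Pull the "percent done" figure out of zpool status output.
# Here a captured sample stands in for the live command.
status_sample='  scan: resilver in progress since Mon Apr 28 17:18:09 2014
    190M scanned out of 2.91T at 120.7M/s, 6h3m to go
    62.2M resilvered, 0.01% done'
echo "$status_sample" | grep -o '[0-9.]*% done'   # prints: 0.01% done
```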

8. Expand pool size, if applicable.

Since the drive I replaced was smaller than the others and its replacement is larger, I can now expand the size of the pool by exporting and then importing the pool once the resilver finishes.

# zpool export datapool
# zpool import datapool
# zfs list

Now you see the extra space!
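It's worth noting how much extra space to expect: RAIDz capacity is limited by the smallest member drive, so replacing the 1.5TB drive with a 3TB one only raises that floor to the next-smallest member, 2TB. Roughly, using decimal GB for illustration:

```shell
# raidz1 usable space is approximately (N - 1) * smallest_member.
# Before: members 2TB, 2TB, 1.5TB -> smallest is 1.5TB (1500 GB)
echo "before: $(( (3 - 1) * 1500 )) GB usable"   # before: 3000 GB usable
# After:  members 2TB, 2TB, 3TB   -> smallest is 2TB (2000 GB); the extra
# 1TB on the new drive sits unused until the other drives are upgraded too.
echo "after:  $(( (3 - 1) * 2000 )) GB usable"   # after:  4000 GB usable
```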
