
Brain Dump

ZFS vdev zpool for Dummies

This is a short write-up of my thought process while learning about basics of ZFS.

A short note on my use case and requirement:

  • Home network
  • Consumer hardware with currently max. 4 drives

My biggest takeaways come from the slide deck by TrueNAS forum user cyberjock1, which explains ZFS terminology and some common n00b mistakes.

Terminology and Topology

  • HDDs go inside vdevs
  • vdevs go inside a zpool
  • The zpool stores your data
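The topology above maps directly onto the `zpool` command line. A minimal sketch, assuming a pool named `tank` and placeholder device names:

```shell
# Create a zpool named "tank" with a single mirror vdev of two disks
# (the pool and device names here are placeholders).
zpool create tank mirror /dev/sda /dev/sdb

# Show the resulting topology: pool -> vdev(s) -> disks.
zpool status tank
```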

ZFS n00b Pitfalls

  • In a zpool with multiple vdevs, the failure of any single vdev loses the entire zpool, with no chance of recovery
  • In ZFS, a disk failure is not the real concern; a vdev failure is
  • Once a vdev is added to a zpool, it cannot be removed

How to Ensure vdev Health

  • A redundant vdev setup, such as mirror (analogous to RAID1), raidz1, or raidz2, is highly recommended
  • Arguments for only deploying mirror vdevs2
    • Resilvering/rebuilding is fast compared to striped parity (raidz1/2)
    • Better performance (quoting core ZFS developer)
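The redundancy of a vdev is fixed when it is created. A sketch of the three layouts mentioned above, with placeholder pool and device names:

```shell
# Mirror (RAID1-like): 2 disks, survives the loss of 1.
zpool create tank mirror /dev/sda /dev/sdb

# raidz1: single parity, survives the loss of 1 disk.
zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdc

# raidz2: double parity, survives the loss of any 2 disks.
zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd
```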

Storage Size Extension Pitfalls

  • Adding a single drive to an existing vdev is not possible
  • Once created, a vdev cannot accept new disks

Options to Extend Storage

  • First, make sure each vdev has redundancy (mirror or raidz1/2)
  • Option 1: add new vdev, identical to the existing vdev
    • e.g., if the first vdev has 3 drives in raidz1, the expansion should also be 3 drives in the same raidz1 setup
    • Recommendation: use 2-drive mirrors, so each expansion needs only 2 drives
    • Con: really only applicable to pro/enterprise hardware with spare drive bays
  • Option 2: replace the existing disks in a vdev with bigger disks, one by one2, a.k.a. autoexpand
    • Con: resilvering each new disk can take 1-2 days if the vdev is raidz1/2
    • IMO: I think I will go with this. It will only happen once every 3-4 years. Let’s see how my data grows.
  • Option 3: move the data to another device/pool, destroy and recreate the zpool, then copy the data back
    • Can’t imagine the stress, though
    • A Reddit user suggests adding a JBOD volume/pool as a temporary data sink while the original zpool is destroyed and recreated3
  • Option 4: wait until the official raidz expansion feature is stable45
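Options 1 and 2 can be sketched with standard `zpool` commands (pool and device names are placeholders; the two options are alternatives, not a sequence):

```shell
# Option 1: extend the pool by adding a second, identical vdev
# (here: a second 2-disk mirror next to an existing one).
zpool add tank mirror /dev/sdc /dev/sdd

# Option 2: grow an existing vdev by replacing its disks with
# bigger ones, one at a time, with autoexpand enabled.
zpool set autoexpand=on tank
zpool replace tank /dev/sda /dev/sde   # wait for the resilver to finish...
zpool status tank                      # ...before replacing the next disk
zpool replace tank /dev/sdb /dev/sdf
```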

Now I’m asking myself: are all the fuss and pitfalls of ZFS justified for my use case?

Some points to support ZFS:

  • Automatic scrubbing and checksumming prevent silent data corruption, which makes it suitable for long-term storage
  • Live snapshot functionality
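Both points correspond to one-liners; a sketch with placeholder pool and dataset names:

```shell
# Start a scrub: reads all data and verifies checksums,
# repairing from redundancy where possible.
zpool scrub tank
zpool status tank   # shows scrub progress and any repaired errors

# Take a live, read-only snapshot of a dataset.
zfs snapshot tank/data@before-upgrade
zfs list -t snapshot
```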

  1. https://www.truenas.com/community/threads/slideshow-explaining-vdev-zpool-zil-and-l2arc-for-noobs.7775/ — Storage Expansion options from page 22. Lots of gems for n00bs starting from page 37. ↩︎

  2. https://jrs-s.net/2015/02/06/zfs-you-should-use-mirror-vdevs-not-raidz/ — TL;DR on the bottom of the post ↩︎

  3. https://www.reddit.com/r/freenas/comments/di8vyu/is_extending_my_raidz2_pool_really_this/f3udhpe?context=3 ↩︎

  4. https://github.com/openzfs/zfs/pull/8853 ↩︎

  5. https://twitter.com/mahrens1/status/1338876011161690112 ↩︎