
ZFS Pool Design: RAIDZ vs Mirrors for a Home NAS

How to decide between RAIDZ1, RAIDZ2, and mirror vdevs for a home TrueNAS pool. Trade-offs in usable capacity, rebuild risk, IOPS, and what 'one big pool' really costs you.

By Editorial · 8 min read

The pool you create on day one is the hardest thing to change later. You can grow a ZFS pool, but you cannot freely change its vdev topology after the fact: a pool built from RAIDZ2 vdevs is a RAIDZ2 pool, full stop. Get this layer right and the next decade of NAS administration is straightforward. Get it wrong and you are eventually destroying and rebuilding the pool.

This guide is about that decision: RAIDZ or mirrors, and which variant.

Vocabulary refresher

A ZFS pool is built out of vdevs (virtual devices). A vdev is one or more physical disks grouped in a specific redundancy topology. A pool stripes data across its vdevs. Lose a vdev, lose the pool. Redundancy lives at the vdev level, not the pool level.
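The hierarchy is visible directly in `zpool status`. A sketch of the output for a hypothetical pool named `tank` built from two 2-way mirror vdevs (pool and device names are placeholders):

```shell
zpool status tank
#   pool: tank
#  state: ONLINE
# config:
#         NAME        STATE     READ WRITE CKSUM
#         tank        ONLINE       0     0     0
#           mirror-0  ONLINE       0     0     0
#             ada0    ONLINE       0     0     0
#             ada1    ONLINE       0     0     0
#           mirror-1  ONLINE       0     0     0
#             ada2    ONLINE       0     0     0
#             ada3    ONLINE       0     0     0
#
# Indentation mirrors the hierarchy: pool -> vdevs -> disks. Losing
# both ada0 and ada1 kills mirror-0, which kills the whole pool.
```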

The common vdev types for a home NAS:

- Mirror: two or more disks holding identical copies of every block; the vdev survives as long as one disk does.
- RAIDZ1: single parity; survives one disk failure per vdev.
- RAIDZ2: double parity; survives two disk failures per vdev.
- RAIDZ3: triple parity; survives three failures per vdev, rarely worth the cost at home scale.

The honest comparison

| Topology | Min disks (for type) | Usable capacity | Fault tolerance | IOPS scaling | Rebuild stress |
|---|---|---|---|---|---|
| 2-way mirror | 2 | 50% of raw | 1 disk per vdev | Scales with vdev count (high) | Read 1 disk, fast |
| 3-way mirror | 3 | 33% of raw | 2 disks per vdev | Scales with vdev count (high) | Read 1 disk, fast |
| RAIDZ1 (4-wide) | 3 | ~75% | 1 disk per vdev | One vdev = one disk of IOPS | Read all surviving disks |
| RAIDZ2 (6-wide) | 4 | ~67% | 2 disks per vdev | One vdev = one disk of IOPS | Read all surviving disks |
| RAIDZ2 (8-wide) | 4 | ~75% | 2 disks per vdev | One vdev = one disk of IOPS | Read all surviving disks |

Two things in this table matter most:

  1. RAIDZ vdevs do not scale IOPS. A six-drive RAIDZ2 vdev has roughly the random-write IOPS of a single disk. To increase random IOPS in a ZFS pool, you add more vdevs, not wider vdevs. Mirrors scale IOPS naturally because each mirror vdev contributes its own IOPS to the pool.
  2. RAIDZ resilvers stress every remaining disk in the vdev. During a resilver, ZFS reads from all surviving disks to reconstruct the missing one. On a wide RAIDZ vdev built from older drives, the resilver itself raises the probability of a second failure mid-rebuild.
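Both effects are observable from the command line during a rebuild. Assuming a pool named `tank` (a placeholder), `zpool status` reports resilver progress and `zpool iostat` shows the per-disk load:

```shell
# Watch resilver progress; the 'scan:' line shows speed and estimated
# completion time.
zpool status tank

# Per-disk I/O at 1-second intervals. During a RAIDZ resilver you will
# see every surviving disk in the vdev being read simultaneously.
zpool iostat -v tank 1
```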

How to think about the choice

Match the topology to the workload.

“I want a media library and bulk file storage”

This is the most common home NAS workload: large sequential reads and writes, low IOPS demands, capacity matters more than performance.

Use RAIDZ2. Go six-wide or eight-wide. You get ~67–75% usable capacity, survive two disk failures, and the sequential throughput is excellent. Avoid RAIDZ1 on modern multi-terabyte drives — the rebuild window is long enough that a second failure is not negligible.
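On TrueNAS you would normally build this in the web UI, but the equivalent `zpool create` makes the topology explicit. A sketch, assuming six disks `da0`–`da5` and a pool named `tank` (all names are placeholders; substitute your own device names):

```shell
# One 6-wide RAIDZ2 vdev: ~67% usable, survives any two disk failures.
zpool create tank raidz2 da0 da1 da2 da3 da4 da5

# TrueNAS sets ashift=12 (4K sectors) by default; be explicit when
# creating pools by hand:
# zpool create -o ashift=12 tank raidz2 da0 da1 da2 da3 da4 da5
```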

“I want VMs and databases on my NAS”

VM disks (zvols, or .vmdk/.qcow2 files on a dataset) demand random IOPS. RAIDZ will be slow.

Use mirrors. Two-way mirrors give you a pool whose random IOPS scales with the number of vdevs. You lose 50% to redundancy, but you gain the responsiveness VMs and databases expect. This is the standard recommendation for a TrueNAS pool intended to host VMs or as iSCSI block storage.
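The mirror equivalent, again with placeholder device names. Each `mirror` keyword starts a new vdev, and the pool stripes across all of them:

```shell
# Two 2-way mirror vdevs: 50% usable, and the pool's random IOPS
# grows with the number of mirror vdevs.
zpool create tank mirror da0 da1 mirror da2 da3

# Growing the pool later is just another pair:
zpool add tank mirror da4 da5
```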

“I want both”

You have two reasonable options:

  1. Two pools. One mirror-based pool for VMs/databases, one RAIDZ2 pool for bulk storage. This is what most TrueNAS users with serious VM workloads do. The downside is you commit drive bays to each, and you cannot grow one at the expense of the other.
  2. One RAIDZ2 pool with a special vdev or SLOG. Adding a fast NVMe special vdev (mirrored, always) can hold metadata and small blocks on flash, dramatically improving random read performance for things like browsing large datasets and small-file workloads. This does not fix sustained VM workloads, but for many homes it is enough.
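A sketch of option 2, assuming two NVMe devices `nvd0`/`nvd1` (placeholders) and an existing pool `tank`. The special vdev must be mirrored, because losing it loses the pool:

```shell
# Add a mirrored special vdev holding metadata (and optionally small
# blocks) on flash:
zpool add tank special mirror nvd0 nvd1

# Per dataset, route blocks <= 32K to the special vdev. Size the NVMe
# mirror with this in mind; it fills up like any other vdev.
zfs set special_small_blocks=32K tank/apps
```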

“I want every drive bay to count”

If the bay count is small (a 4-bay NAS), the capacity-versus-redundancy trade-off is brutal. Options:

- 4-wide RAIDZ2: 50% usable, survives any two disk failures. The safe default.
- Two 2-way mirrors: 50% usable, better IOPS, easy to upgrade one pair at a time, but a second failure in the same mirror loses the pool.
- 4-wide RAIDZ1: ~75% usable, but only one disk of protection.

Do not run RAIDZ1 with drives larger than ~10 TB. The resilver time is long enough that the second-failure risk is real.
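A back-of-envelope calculation shows why. The absolute floor on resilver time is reading one full drive sequentially; real RAIDZ resilvers are slower because the I/O is not purely sequential and the pool stays in use. Assuming a 16 TB drive and ~150 MB/s sustained throughput (illustrative numbers):

```shell
# 16 TB = 16,000,000 MB; at 150 MB/s that is 106,666 s of pure reading.
# Integer shell arithmetic gives the floor in whole hours:
echo $(( 16000000 / 150 / 3600 ))   # 29
```

Call it a day and change of best-case exposure with zero remaining redundancy on RAIDZ1, which is why RAIDZ2's second parity disk matters.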

Mistakes we see repeatedly

Mixing topologies in one pool. A pool with both a RAIDZ2 vdev and a mirror vdev technically works: ZFS stripes across them. But for failure planning the pool is only as safe as its least-redundant vdev, and performance becomes lopsided and hard to reason about. Don’t do it. Pick one topology and stick with it.

Adding a single disk as a new vdev to an existing pool. ZFS will let you, and the pool will accept the disk. It is also now a pool whose redundancy is gated by a single non-redundant disk. Lose that disk and the whole pool is gone. Never add a single non-redundant disk to an existing pool except as cache or log.
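ZFS does at least warn before letting you make this mistake. With a hypothetical RAIDZ pool `tank` and a spare disk `da9`, `zpool add` refuses the mismatched vdev unless forced (error text paraphrased from OpenZFS):

```shell
zpool add tank da9
# invalid vdev specification
# use '-f' to override the following errors:
# mismatched replication level: pool uses raidz and new vdev is disk

# If you ever see this message, stop. '-f' is how pools end up one
# non-redundant disk away from total loss.
```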

Building a too-wide RAIDZ vdev. Twelve-wide RAIDZ2 looks attractive for capacity efficiency. The resilver time on a 12-disk RAIDZ2 vdev with multi-terabyte disks is measured in days, and IOPS is still that of one disk. For most home users, 6-wide or 8-wide RAIDZ2 is the sweet spot.

Treating RAIDZ as a substitute for backup. Snapshots and replication to a separate system are what protect your data. A pool that survives a single drive failure does not survive a pool corruption, a controller failure that wipes labels, an rm -rf issued against the wrong dataset, or a fire.
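The minimum viable backup is a recursive snapshot replicated to a second machine. A sketch, assuming a pool `tank`, a receiving host `backup-host`, and a destination pool `backup` (all placeholders; TrueNAS schedules the same thing via its Replication Tasks UI):

```shell
# Take a recursive snapshot of the pool and every child dataset:
zfs snapshot -r tank@nightly-2024-01-01

# Send the snapshot stream to another machine over SSH. -R includes
# child datasets; -d keeps the sent dataset paths; -F rolls the
# destination back to match.
zfs send -R tank@nightly-2024-01-01 | ssh backup-host zfs receive -dF backup
```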

A concrete starting recommendation

For a typical 4–8 bay home NAS focused on media, documents, and a handful of apps: build one RAIDZ2 vdev, six or eight drives wide if the bays allow, four wide if not. It survives any two disk failures, resilvers without putting the pool one disk from loss, and yields roughly 50–75% usable capacity depending on width.

If your primary workload is VMs or iSCSI block storage, replace that recommendation with two or three 2-way mirror vdevs.

#zfs #raidz #mirror #pool-design #truenas
