https://www.truenas.com/solution-guides/#TrueNAS-PDF-zfs-storage-pool-layout/1/
https://www.truenas.com/blog/zfs-pool-performance-1/
https://www.truenas.com/blog/zfs-pool-performance-2/
https://forums.truenas.com/t/i-have-to-waste-an-entire-drive-just-for-booting/1501
https://forums.truenas.com/t/hardware-buying-guide-some-tips-for-not-getting-ripped-off/1583
file:///home/ethan/Downloads/Introduction%20to%20ZFS%20R1d.pdf
file:///home/ethan/Downloads/Hardware%202021%20R2a%20.pdf
https://jro.io/capacity/

ZFS performs best when the number of *data* drives in a vdev is a power of two (e.g., 2, 4, 8), because data blocks then stripe evenly across the drives with minimal padding waste. When configuring a ZFS pool with RAIDZ on TrueNAS, selecting the appropriate RAIDZ level for your drive count is crucial for balancing storage efficiency, performance, and data redundancy. The main considerations are efficiency (of the total pool), speed, and safety. An even total drive count works out best with RAIDZ2 and an odd total works out best with RAIDZ3 (a power-of-two data count plus two or three parity drives), but only up to ~12 drives. Past that, I think it's more complicated; the main consideration at that point is that the larger the pool, the more likely a drive is to fail within it.

Up to 12 disks in a single RAID-Z vdev is good:

> The more drives you have in a vDev, the less flexibility you have.
> The more vDevs you have, the more performance you get.

https://www.servethehome.com/raid-calculator/raid-reliability-calculator-simple-mttdl-model/

**Additional Considerations:**

- **Drive Capacity:** Larger drives mean longer rebuild times, which elevates the risk of additional failures during resilvering. For drives 8TB and above, RAIDZ2 or RAIDZ3 is advisable to mitigate this risk.
- **Performance:** Wider vdevs (more drives per vdev) can deliver higher throughput but also take longer to rebuild. Balancing the number of drives per vdev is essential for optimal performance and reliability.
- **Future Expansion:** ZFS traditionally hasn't allowed adding drives to an existing RAIDZ vdev (RAIDZ expansion only arrived in recent OpenZFS releases); expansion typically means adding new vdevs to the pool. Plan the initial configuration with future scalability in mind.

6-wide Z2 vdevs are a good middle ground between performance and capacity.

For enhanced performance and storage efficiency, configure RAIDZ vdevs so that the data-drive count (total drives minus parity drives) lands on a power of two. This alignment ensures that data blocks are evenly distributed across the drives, minimizing potential inefficiencies.

> RAIDZ1 is basically RAID5 and should work best in multiples of 4 plus one disk for redundancy. E.g. 5x4TB would result in 4+1 disks, usable space (gross) 16TB.
>
> RAIDZ2 is basically RAID6 and should work best in multiples of 4 plus two disks for redundancy. E.g. 10x8TB would result in 8+2 disks, usable space (gross) 64TB.

> Modern ZFS installations use compression for most, if not all, datasets (aka file systems), even when the data is already compressed, like media files. Thus, there is no longer a recommendation to have a specific count of disks in each vDev for a RAID-Zx. In the past, without compression, certain disk counts worked better for each of the Zx. Today, there is a minimum recommended and a maximum recommended, something like this:
>
> RAID-Z2 - 4 disks to 10 disks per vDev
> RAID-Z3 - 6 disks to 12 disks per vDev
>
> I've left off RAID-Z1 as it's not recommended with larger disks, like yours, 4TB & 8TB. But, if you can live with the risk (if you have good backups, or can re-create the data), then RAID-Z1 will work. You can then add a second vDev of 5 disks to increase storage.
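The servethehome calculator linked above uses a simple MTTDL (mean time to data loss) model, and a rough sketch of that style of model helps quantify why RAID-Z1 gets risky as drives grow: resilver time shows up in the denominator once per parity level. The 1M-hour drive MTTF and the 24h/96h resilver windows below are made-up placeholders, and the model ignores unrecoverable read errors and correlated failures, so treat the output as a relative comparison only.

```python
from math import prod

def mttdl_hours(n_disks: int, parity: int, mttf_h: float, mttr_h: float) -> float:
    """Simple MTTDL model: data is lost when parity + 1 drives fail
    within overlapping rebuild windows. Ignores UREs and batch failures."""
    combos = prod(n_disks - i for i in range(parity + 1))
    return mttf_h ** (parity + 1) / (combos * mttr_h ** parity)

# Placeholder inputs: 1,000,000-hour MTTF; 24 h vs. 96 h resilver.
for label, n, p in [("5-wide Z1", 5, 1), ("6-wide Z2", 6, 2), ("11-wide Z3", 11, 3)]:
    for rebuild_h in (24.0, 96.0):
        years = mttdl_hours(n, p, 1_000_000.0, rebuild_h) / 8766
        print(f"{label}, {rebuild_h:>4.0f} h resilver: MTTDL ~ {years:,.0f} years")
```

With these inputs, quadrupling the resilver time costs Z1 a factor of 4 in MTTDL but Z3 a factor of 64; in absolute terms, each extra parity level still multiplies MTTDL by roughly MTTF / (n · MTTR), which is why Z1 with big, slow-to-resilver disks is the outlier worth avoiding.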
According to discussions in the TrueNAS community, certain drive counts are traditionally recommended to optimize performance and storage efficiency:

- **RAIDZ1**: 3, 5, or 9 drives
- **RAIDZ2**: 4, 6, or 10 drives
- **RAIDZ3**: 5, 7, or 11 drives

Here's a more detailed look...

### Storage Efficiency and Optimal Drive Counts for RAIDZ Configurations (2–12 Drives)

| **Total Drives** | **RAIDZ1 (1 Parity Drive)** | **RAIDZ2 (2 Parity Drives)** | **RAIDZ3 (3 Parity Drives)** | **Notes** |
| ---------------- | --------------------------- | ---------------------------- | ---------------------------- | --------- |
| **2 (even)** | 50.00% (1 data) | N/A | N/A | Minimum drive count for RAIDZ1. Tolerates 1 failure, but a 2-way mirror is the usual choice at this size. |
| **3 (odd)** | 66.67% (2 data) | N/A | N/A | Classic small RAIDZ1 layout: 2 data drives is a power of two. |
| **4 (even)** | 75.00% (3 data) | 50.00% (2 data) | N/A | Optimal for RAIDZ2 (2 data drives); RAIDZ1 here has an unaligned 3-data stripe. |
| **5 (odd)** | 80.00% (4 data) | 60.00% (3 data) | 40.00% (2 data) | Classic RAIDZ1 layout (4 data drives); also the smallest practical RAIDZ3. |
| **6 (even)** | 83.33% (5 data) | 66.67% (4 data) | 50.00% (3 data) | Classic RAIDZ2 layout: 4 data drives with good fault tolerance. |
| **7 (odd)** | 85.71% (6 data) | 71.43% (5 data) | 57.14% (4 data) | Classic RAIDZ3 layout (4 data drives); Z1 and Z2 land on unaligned counts. |
| **8 (even)** | 87.50% (7 data) | 75.00% (6 data) | 62.50% (5 data) | High efficiency, but no RAIDZ level gets a power-of-two data count. |
| **9 (odd)** | 88.89% (8 data) | 77.78% (7 data) | 66.67% (6 data) | Classic (if wide) RAIDZ1 layout: 8 data drives. |
| **10 (even)** | 90.00% (9 data) | 80.00% (8 data) | 70.00% (7 data) | Classic RAIDZ2 layout for larger setups: 8 data drives. |
| **11 (odd)** | 90.91% (10 data) | 81.82% (9 data) | 72.73% (8 data) | Classic RAIDZ3 layout: 8 data drives behind triple parity. |
| **12 (even)** | 91.67% (11 data) | 83.33% (10 data) | 75.00% (9 data) | Upper end of the recommended vdev width; high efficiency, but no power-of-two data count. |

These recommendations are based on aligning data blocks with the stripe width to minimize wasted space and maintain performance. However, with LZ4 compression, the default in FreeNAS 9.2.1 and higher, these specific drive counts have become less critical: compressed blocks vary in size and rarely stripe in neat power-of-two chunks anyway, so strict adherence to these numbers is no longer necessary.
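Since the percentages above are just data drives divided by total drives, the whole table can be regenerated with a few lines of Python. This is a sketch under the table's own conventions: the N/A cutoffs mirror the rows above, and the asterisk marks layouts whose data-drive count is a power of two (the classic 3/5/9, 4/6/10, 5/7/11 counts).

```python
def raidz_efficiency(total: int, parity: int) -> float | None:
    """Raw efficiency of one RAIDZ vdev: data drives / total drives.
    ZFS itself only requires parity + 1 disks; the N/A cutoffs here
    just mirror the rows in the table above."""
    min_total = {1: 2, 2: 4, 3: 5}[parity]
    if total < min_total:
        return None
    return (total - parity) / total

def is_power_of_two(n: int) -> bool:
    return n > 0 and n & (n - 1) == 0

for total in range(2, 13):
    cells = []
    for parity in (1, 2, 3):
        eff = raidz_efficiency(total, parity)
        if eff is None:
            cells.append(f"Z{parity}:    N/A ")
        else:
            # '*' marks a power-of-two data-drive count (3/5/9, 4/6/10, 5/7/11)
            mark = "*" if is_power_of_two(total - parity) else " "
            cells.append(f"Z{parity}: {eff:6.2%}{mark}")
    print(f"{total:>2} drives  " + "  ".join(cells))
```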
---

Let's calculate the cost per usable terabyte (TB) for each recommended RAIDZ configuration, assuming $210 per 18TB drive and 16TB of usable capacity per drive after formatting overhead (parity drives contribute no usable space):

| **Drives** | **RAIDZ1 Cost/TB** | **RAIDZ2 Cost/TB** | **RAIDZ3 Cost/TB** | **Total Cost** |
| ---------- | ------------------ | ------------------ | ------------------ | -------------- |
| 3          | $19.69             |                    |                    | $630           |
| 4          |                    | $26.25             |                    | $840           |
| 5          | $16.41             |                    | $32.81             | $1050          |
| 6          |                    | $19.69             |                    | $1260          |
| 7          |                    |                    | $22.97             | $1470          |
| 9          | $14.77             |                    |                    | $1890          |
| 10         |                    | $16.41             |                    | $2100          |
| 11         |                    |                    | $18.05             | $2310          |

https://jro.io/
https://jro.io/graph/
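The cost table above is straightforward arithmetic, so here's a small Python sketch that reproduces it. The $210 drive price and the 16TB-usable-per-drive figure are the assumptions stated above; cost per usable TB is just (total drives × price) / (data drives × usable TB per drive).

```python
DRIVE_PRICE_USD = 210       # price per 18 TB drive, as stated above
USABLE_TB_PER_DRIVE = 16    # assumed usable TB per drive after formatting

layouts = [(3, 1), (5, 1), (9, 1),    # RAIDZ1 rows
           (4, 2), (6, 2), (10, 2),   # RAIDZ2 rows
           (5, 3), (7, 3), (11, 3)]   # RAIDZ3 rows

for total, parity in layouts:
    cost = total * DRIVE_PRICE_USD
    usable_tb = (total - parity) * USABLE_TB_PER_DRIVE
    print(f"{total:>2}-wide Z{parity}: ${cost:>4} for {usable_tb:>3} TB usable"
          f" -> ${cost / usable_tb:5.2f}/TB")
```

Running it confirms the pattern in the table: for a fixed RAIDZ level, wider vdevs amortize the parity drives and drive the cost per usable TB down, while each additional parity level at the same width pushes it up.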