I was testing network speeds on my TrueNAS SCALE 24 server to see how fast my local network could transfer data after installing a new 10GbE network switch. Since TrueNAS appliances ship with package management disabled, I used the pre-installed `iperf3` tool from the System Shell in the TrueNAS UI:

```
truenas_admin@truenas[~]$ iperf3 -s
-----------------------------------------------------------
Server listening on 5201 (test #1)
-----------------------------------------------------------
```

After installing the same tool on my Ubuntu PC, I ran:

```
ethan@ethan-desktop-ubuntu-24:~$ iperf3 -c 192.168.1.233
...
[  5]   0.00-10.00  sec  2.74 GBytes  2.36 Gbits/sec   32          sender
[  5]   0.00-10.00  sec  2.74 GBytes  2.35 Gbits/sec               receiver

iperf Done.
```

The results confirmed auto-negotiation was working correctly between my NAS's 10GbE NIC and my PC's 2.5GbE NIC, achieving approximately 2.35 Gbps (~294 MB/s). However, actual SMB file-transfer speeds were significantly slower:

- **PC → NAS (upload):** ~1.2 Gbps (around 150 MB/s), roughly a 20% gain over my previous setup.
- **NAS → PC (download):** even slower, at around **0.5 Gbps (≈62 MB/s)**.

I suspected another bottleneck, so just to rule things out, I tested disk performance directly on the NAS using `fio`:

**Disk Read Performance:**

```
truenas_admin@truenas[~]$ sudo fio --name=read_test --directory=/mnt/pool0 --rw=read --bs=1M --size=10G --numjobs=4 --runtime=60 --group_reporting
...
   READ: bw=4075MiB/s (4273MB/s)...
```

The pool showed impressive read speeds (~4.2 GB/s, likely helped by ZFS's ARC cache, since that exceeds the drives' raw aggregate throughput), far exceeding the network's measured maximum (2.35 Gbps ≈ 294 MB/s) and confirming the disks were not the issue.

**Disk Write Performance:**

```
truenas_admin@truenas[~]$ sudo fio --name=write_test --directory=/mnt/pool0 --rw=write --bs=1M --size=10G --numjobs=4 --runtime=60 --group_reporting
...
  WRITE: bw=800MiB/s (839MB/s)...
```

The write speed of 839 MB/s was solid and typical for a RAIDZ2 array, again showing the disks weren't the bottleneck during network writes. As before, I didn't expect the disks to be the issue; I was just ruling things out.

My pool is six Seagate EXOS X20 18TB drives in RAIDZ2, each rated at a 285 MB/s sustained transfer rate. With two disks' worth of parity, the four data drives can theoretically stream reads at roughly four times a single drive (4 × 285 ≈ 1,140 MB/s), easily surpassing what the 2.5GbE link actually delivers (~294 MB/s); sequential writes are slower, as the ~839 MB/s fio result shows, but still well above what the network can carry.

After further investigation, I stumbled across a TrueNAS forum discussion highlighting performance issues with SMB shares on SCALE, especially with the "Multi-protocol" setting enabled (which mainly affected Mac users). Although this didn't directly apply to my setup (Ubuntu client, SMB-only share without multi-protocol), it prompted me to reconsider how I had mounted the SMB share.

I remembered I'd initially mounted the share using Ubuntu's Files (Nautilus) GUI, which relies on GVfs (the GNOME Virtual File System) via GIO, roughly equivalent to the `gio mount` command shown below. This method is convenient because it integrates well with GNOME and its apps, but GVfs adds significant overhead: requests pass through a single-threaded user-space daemon, bypass the kernel page cache, and incur extra memory copies and context switches. That GVfs mount is what produced the 1.2 Gbps and 0.5 Gbps numbers above.
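For context, mounting a share in Files goes through the same GVfs machinery you can drive from the command line. A minimal sketch, assuming a share named `media` (the share name and the resulting FUSE path are placeholders based on GVfs's usual naming):

```
# User-space GVfs/GIO mount, which is what Nautilus does behind the scenes
# ("media" is a placeholder share name)
gio mount smb://192.168.1.233/media

# The share is then exposed through a FUSE bridge at a path like:
#   /run/user/$UID/gvfs/smb-share:server=192.168.1.233,share=media
```

Every read and write on that path is relayed through the gvfsd-smb daemon in user space, which is where the overhead comes from.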
By contrast, switching to a traditional kernel CIFS mount dramatically improved performance to 2.35 Gbps in both directions, maxing out the network. CIFS outperformed GVfs because it:

1. Mounts the SMB share directly in the Linux kernel's filesystem layer, reducing context switching.
2. Benefits from kernel-level caching, read-ahead optimizations, and multi-threaded I/O.
3. Supports advanced SMB3 features such as multichannel, SMB Direct (RDMA), and enhanced caching mechanisms.
4. Uses kernel-based network processing for lower latency and higher throughput.

Overall, using the CIFS mount (sketched below) allowed me to take full advantage of my 2.5 Gbps network connection, confirming GVfs was the limiting factor in my initial tests.
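For reference, here is a minimal sketch of the kernel CIFS mount I switched to. The share name (`media`), mount point, and credentials file path are placeholders, and your SMB username will differ; `mount.cifs` comes from the `cifs-utils` package:

```
# One-time setup on Ubuntu: install the kernel CIFS helper and create a mount point
sudo apt install cifs-utils
sudo mkdir -p /mnt/nas

# Keep credentials out of the command line (placeholder path and values):
#   /etc/smb-credentials contains two lines:
#     username=ethan
#     password=<your SMB password>
sudo chmod 600 /etc/smb-credentials

# Mount the share ("media" is a placeholder) with the in-kernel CIFS client
sudo mount -t cifs //192.168.1.233/media /mnt/nas \
    -o credentials=/etc/smb-credentials,vers=3.1.1,uid=$(id -u),gid=$(id -g)

# Or make it persistent via /etc/fstab:
# //192.168.1.233/media  /mnt/nas  cifs  credentials=/etc/smb-credentials,vers=3.1.1,uid=1000,gid=1000  0  0
```

`vers=3.1.1` requests the SMB 3.1.1 dialect, and `uid`/`gid` map the mounted files to my desktop user so permissions behave like a local filesystem.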