I chose not to run everything in VMs because that wastes resources on multiple kernel instances and virtual hardware emulation, when containers can share a single kernel and deliver near-native performance for most services. That means I can run more apps on the same hardware.

Direct hardware access is especially important for TrueNAS SCALE, because ZFS's self-healing and drive monitoring need to see the real disks. Otherwise, the hypervisor presents virtual disks to TrueNAS, and ZFS never sees the actual hardware. You CAN virtualize TrueNAS with direct hardware access, but passing ANY PCI/PCIe device straight through to a VM requires IOMMU support (a feature not every motherboard has) plus extra hardware: a SATA HBA controller (e.g., an LSI 9211-8i). Although that is the "proper" way to virtualize TrueNAS, it still introduces overhead and exposure to potential hypervisor bugs. You also need a free PCIe slot for that HBA. My mobo has only two slots, so I could fit just two of the three cards I'd want: GPU, NIC, HBA (as if I needed more reasons). I opted to keep the machine's primary function simpler. Plus I get GPU-accelerated ML inference and 10GbE networking.

On the other hand, VMs provide much stronger isolation, so a security breach or misconfiguration in one service is far less likely to affect others. But the pros did not outweigh the cons for me.
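
If you're weighing the passthrough route yourself, the first thing to check is whether your board actually exposes usable IOMMU groups. Here's a minimal sketch (assuming a Linux host with VT-d/AMD-Vi enabled in firmware and `intel_iommu=on` or `amd_iommu=on` on the kernel command line, so that `/sys/kernel/iommu_groups` is populated) that lists each group and the PCI devices in it:

```python
#!/usr/bin/env python3
"""List IOMMU groups and the PCI devices in each one.

A quick sanity check before attempting PCI/PCIe passthrough: a device can
only be handed to a VM cleanly if its IOMMU group doesn't also contain
devices the host still needs. Assumes a Linux host with the IOMMU enabled
(VT-d / AMD-Vi in firmware plus intel_iommu=on or amd_iommu=on).
"""
from pathlib import Path

IOMMU_ROOT = Path("/sys/kernel/iommu_groups")

def main() -> None:
    if not IOMMU_ROOT.is_dir() or not any(IOMMU_ROOT.iterdir()):
        print("No IOMMU groups found -- IOMMU is disabled or unsupported.")
        return

    # Group directory names are plain integers, so sort them numerically.
    for group in sorted(IOMMU_ROOT.iterdir(), key=lambda p: int(p.name)):
        print(f"IOMMU group {group.name}:")
        for dev in sorted((group / "devices").iterdir()):
            # dev.name is the PCI address (e.g. 0000:01:00.0); the class
            # file holds a hex code such as 0x010700 for a SAS HBA.
            pci_class = (dev / "class").read_text().strip()
            print(f"  {dev.name}  class={pci_class}")

if __name__ == "__main__":
    main()
```

If the HBA turns out to share a group with devices the host still needs, passthrough gets considerably messier, which for me was one more argument for keeping the storage box on bare metal.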