I have never claimed to be the smartest person ever, but I made some really bone-headed decisions for a storage array. First off, I have been using FreeNAS since version 7.0. It's fast, resilient, and, for the most part, plug-and-play. Back in my early days of computer experimentation, its web GUI was the draw; the command line for something as seemingly complex as ZFS was out of reach.
At Purdue, armed with four HP ProLiant DL380 G7s and a Rosewill case, I needed a lot of fast storage to keep the systems fed. iSCSI with PXE was fantastic for keeping the servers running without the need for their own HDDs. Unknown to me, FreeNAS was accumulating errors over time due to some pretty stupid mistakes. The ProLiant's internal SAS connector was driven by a P410, which, while fast, does not support JBOD. In my infinite wisdom, I made seven RAID-0 VDisks. That is a really bad idea, as I have since found out. Last week, when checking the health of the pool (for the first time in months…), this was what I saw:
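For anyone who wants to avoid the same surprise, pool health is cheap to check and worth putting on a schedule. A minimal sketch, assuming a pool named `tank` (yours will differ):

```shell
# Report only unhealthy pools; prints "all pools are healthy" when nothing is wrong
zpool status -x

# Full detail for one pool, including per-device read/write/checksum error counts
# and any files with unrecoverable errors ("tank" is an assumed pool name)
zpool status -v tank
```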
Interesting to note: none of the data errors were caused by the disks failing; they came from ZFS itself. The Seagate Constellation ES.3 drives are fantastic, and I would personally recommend them to anyone. However, occasional power loss combined with the RAID configuration caused some severe data loss over time.
One panic attack later, I acquired two Seagate IronWolf 8TB NAS drives. Take it from a guy who deals with data recovery bi-monthly: do not skimp on drives! Used drives are not a thing you want to deal with! I cannot recommend the IronWolfs yet, as I've only had them for a week, but they're still going strong.
Testing before assuming is something I try to do, but even I couldn't hold back from assuming Windows' storage options are terrible. Still, I gave them the benefit of the doubt. The test was simple: find the storage setup with the best disk performance and *some* redundancy. I couldn't care less about the file system type (ReFS vs. ZFS vs. NTFS vs. EXT4) as long as it's fast. Here's my exact setup:
I was willing to throw all seven drives and the NVMe SSD at anything possible.
The first thing I tried was a seven-disk parity (RAID-5-ish) drive with an NVMe tier cache. I do not have Windows Server Datacenter, so that failed.
Next, I tried a seven-disk simple (RAID-0-ish) drive with an NVMe tier cache. It was a hassle, but I got it set up… I did not even bother testing it, because running seven drives in RAID-0 is suicide.
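Some back-of-the-envelope math shows why. Assuming independent failures and a purely illustrative 3% annual failure rate per drive, a seven-drive stripe has roughly a one-in-five chance of total loss within a year:

```shell
# P(stripe loss) = 1 - P(all seven drives survive) = 1 - 0.97^7
# (the 3% per-drive AFR is an assumed figure, not a measured one)
awk 'BEGIN { survive = 0.97 ^ 7; printf "%.1f%%\n", (1 - survive) * 100 }'
# → 19.2%
```

And with RAID-0, "total loss" means exactly that: any single drive failure takes the whole stripe with it.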
Then I tried a seven-disk parity drive with no cache. It was pretty simple to set up, but performance was awful: the max write speed was 32 MB/s. Unacceptable.
Giving up on Storage Spaces, the next option was to make a drive directly from Disk Management.
A *slight* problem immediately appeared: the SCU controller does not support dynamic disks, so RAID-5 and RAID-6 were impossible.
I tried a simple drive just as a sanity check, which got me a max write of 75 MB/s. Again, simply abysmal.
Having confirmed that software multi-disk setups in Windows are still shite, I moved back to FreeNAS.
Throwing all the drives at a VM was easy enough:
I tested a range of possibilities: different core counts and RAM sizes, with and without the NVMe drive as cache, and with RAID-Z1, Z2, and Z3 parity. This is what I settled on:
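For reference, a layout like this is only a couple of commands from the FreeNAS shell. A hypothetical sketch using RAID-Z2 as the example; every pool and device name here is an assumption, and on FreeNAS the GUI is the better place to do this so the config stays tracked:

```shell
# RAID-Z2 pool across seven disks, with the NVMe drive added as an L2ARC read cache.
# da0..da6 and nvd0 are assumed device names -- verify yours with `camcontrol devlist`.
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6
zpool add tank cache nvd0
```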
The performance increase speaks for itself:
I think it's fair to say that it is genuinely disappointing that a freeware VM inside Hyper-V easily beats out a bare-metal implementation on Windows Server. Long live ZFS, I guess.
I was having some trouble with FreeNAS randomly losing all of its drives, but that turned out to be a Windows driver error. Updating the SAS controller driver made the system significantly more stable.