
Storage in Windows Server

Recently, I had to rebuild my main storage array, and in doing so, I tested a few things out. Here’s what happened:

Background:

I have never claimed to be the smartest person ever, but I made some really bone-headed decisions with this storage array. First off, I have been using FreeNAS since version 7.0. It's fast, resilient, and, for the most part, plug and play. Back in my early days of computer experimentation, its web GUI was the big draw; at that time, the command line for something as seemingly complex as ZFS was out of reach.

At Purdue, armed with four HP ProLiant DL380 G7s and a Rosewill case, I needed a lot of fast storage to keep the systems fed. iSCSI with PXE was fantastic for keeping the servers running without the need for their own HDDs. Unbeknownst to me, FreeNAS was accumulating errors over time due to some pretty stupid mistakes on my part. The ProLiant's internal SAS connector is driven by a Smart Array P410, which, while fast, does not support JBOD. In my infinite wisdom, I worked around that by making seven single-disk RAID-0 vdisks. That is a really bad idea, as I have since found out: the controller hides the physical disks from ZFS, so ZFS cannot manage them directly or guarantee its writes actually reach the platters. Last week, when I checked the health of the pool (for the first time in months…), this is what I saw:

All seven drives showing catastrophic errors

Interesting to note: none of the data errors were caused by the disks themselves failing; the corruption came from the ZFS pool. The Seagate Constellation ES.3 drives are fantastic, and I would personally recommend them to anyone. However, occasional power loss combined with the RAID-0 configuration caused severe data loss over time.
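If you run ZFS, do not repeat my mistake of going months between health checks. A minimal sketch of the routine, assuming a pool named tank (the name is a placeholder):

```sh
# Show pool state and per-device read/write/checksum error counters.
zpool status -v tank

# Start a scrub: ZFS re-reads every block and verifies its checksum,
# repairing what it can. Schedule this monthly instead of "whenever".
zpool scrub tank

# Re-run status later to see scrub progress and any repaired errors.
zpool status tank
```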

One panic attack later, I acquired two Seagate IronWolf 8TB NAS drives. Take it from a guy who deals with data recovery bi-monthly: do not skimp on drives! Used drives are not something you want to deal with! I cannot recommend the IronWolfs yet, as I've only had them for a week, but so far they're going strong.

Main Testing:

Testing before assuming is something I try to do, but even I couldn't hold back from assuming Windows' storage options are terrible. Still, I gave them the benefit of the doubt. The test was simple: find the storage setup with the best disk performance and *some* redundancy. I couldn't care less about the file system type (ReFS vs. ZFS vs. NTFS vs. EXT4) as long as it's fast. Here's my exact setup:

  • 2x Intel SCU SAS Controller (Onboard)
    • 7x Seagate Constellation ES.3 3TB
  • 6x Intel C600 SATA AHCI Controller (Onboard)
    • 1x PNY C900 SSD (Boot)
    • 2x Seagate Ironwolf NAS 8TB
  • C600 Chipset PCIe
    • Samsung 960 EVO NVMe (VMStorage)

I was willing to throw all seven drives and the NVMe SSD at anything possible.

Windows Storage Spaces:

The first thing I tried was a seven-disk parity (RAID-5-ish) virtual disk with an NVMe storage tier as cache. I do not have Windows Server Datacenter, so that failed.
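For the curious, here is roughly what that attempt looks like in PowerShell. This is a sketch, not my exact session; the pool name, tier sizes, and friendly names are all placeholders:

```powershell
# Pool every eligible disk (the seven Constellations plus the 960 EVO).
New-StoragePool -FriendlyName "BigPool" `
    -StorageSubSystemFriendlyName "Windows Storage*" `
    -PhysicalDisks (Get-PhysicalDisk -CanPool $true)

# Define an SSD tier and an HDD tier inside the pool.
$ssd = New-StorageTier -StoragePoolFriendlyName "BigPool" `
    -FriendlyName "SSDTier" -MediaType SSD
$hdd = New-StorageTier -StoragePoolFriendlyName "BigPool" `
    -FriendlyName "HDDTier" -MediaType HDD

# Ask for a tiered parity virtual disk; this is the step that failed for me.
New-VirtualDisk -StoragePoolFriendlyName "BigPool" -FriendlyName "Tiered7" `
    -ResiliencySettingName Parity `
    -StorageTiers $ssd, $hdd -StorageTierSizes 200GB, 14TB
```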

Next, I tried a seven-disk simple (RAID-0-ish) virtual disk with an NVMe tier cache. It was a hassle, but I got it set up… I did not even bother benchmarking it, because putting seven drives in RAID-0 is suicide.

Then I tried a seven-disk parity virtual disk with no cache. It was pretty simple to set up, but performance was awful: the maximum write speed was 32MBps. Unacceptable.
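The no-cache version drops the tiers entirely; something like this, again with placeholder names, drive letter, and size:

```powershell
# Flat parity volume across the pool: no SSD tier, formatted and mounted.
New-Volume -StoragePoolFriendlyName "BigPool" -FriendlyName "Parity7" `
    -ResiliencySettingName Parity -FileSystem NTFS `
    -DriveLetter S -Size 14TB
```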

Windows Computer Management:

Giving up on Storage Spaces, I next tried to build a volume directly in Disk Management.

A *slight* problem immediately appeared: the SCU controller does not support dynamic disks, so Windows' software RAID levels (striped, mirrored, RAID-5) were impossible.

As a sanity check, I tried a simple volume, which topped out at a 75MBps max write. Again, simply abysmal.
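For reference, the sanity check is nothing more exotic than initializing one disk and formatting a plain volume; a sketch with a hypothetical disk number and drive letter:

```powershell
# Bring one Constellation online as a basic GPT disk and format it.
Initialize-Disk -Number 4 -PartitionStyle GPT
New-Partition -DiskNumber 4 -UseMaximumSize -DriveLetter T |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "Sanity"
```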

FreeNAS Virtual Machine:

Having confirmed that software multi-disk setups in Windows are still shite, I moved back to FreeNAS.

Throwing all the drives at a VM was easy enough:

Adding physical disks to a VM in Hyper-V
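The screenshot above is the GUI route; the PowerShell equivalent is short. Note that a physical disk has to be offline on the host before Hyper-V will hand it to a guest. The VM name and disk numbers here are placeholders:

```powershell
# Pass the seven Constellations through to the FreeNAS VM.
$passthrough = 4..10   # host disk numbers (hypothetical)
foreach ($n in $passthrough) {
    Set-Disk -Number $n -IsOffline $true
    Add-VMHardDiskDrive -VMName "FreeNAS" -ControllerType SCSI -DiskNumber $n
}
```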

I tested a range of possibilities: different core counts, different RAM sizes, with and without the NVMe drive as cache, and with RAID-Z1, Z2, and Z3 parity. This is what I settled on (a sketch of the equivalent setup follows the list):

  • Core Count: 8
  • RAM Size: 16384MB (16GB)
  • No NVMe cache (It didn’t help enough to justify)
  • Parity Z1 (I’m confident enough in the drives)
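On the host, those first two settings come down to a couple of cmdlets; inside the guest, the pool itself is one command (FreeNAS normally does this through its GUI, but the raw command is equivalent). VM name, device names, and pool name are placeholders:

```powershell
# Host side: 8 cores and a fixed 16GB of RAM for the FreeNAS VM.
Set-VMProcessor -VMName "FreeNAS" -Count 8
Set-VMMemory -VMName "FreeNAS" -DynamicMemoryEnabled $false -StartupBytes 16GB
```

```sh
# Guest side: one RAID-Z1 vdev across the seven passed-through disks.
# da1..da7 are whatever FreeBSD enumerates; "tank" is a placeholder name.
zpool create tank raidz1 da1 da2 da3 da4 da5 da6 da7
zpool status tank
```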

The performance increase speaks for itself:

Large continuous write onto the array. Even though the file was larger than the VM's 16GB of RAM, the write never slowed down.

I think it's fair to say that a free OS running in a VM inside Hyper-V easily beating a bare-metal implementation on Windows Server is genuinely disappointing. Long live ZFS, I guess.

Update:

I was having some trouble with FreeNAS randomly losing all of its drives, but that turned out to be a Windows driver error. Updating the SAS controller driver made the system significantly more stable.