I have a lot of small companies as customers, each with only one main server plus e.g. a NAS for backup purposes. Those servers have plenty of CPU cores and RAM for everything those companies need: hosting some small VMs, running their industry-specific applications (most likely backed by MS SQL Server) and sharing files. Some of those companies use Active Directory as well, which disables the write cache, at least on the drives it stores its own data on, and which it seems to re-apply on every restart of the system.
Those companies simply can't afford additional servers to host AD alone, and there isn't much load on AD itself anyway. However, some of the hosted services do perform better with write caching enabled. So which workarounds exist to make all those different use cases fit on one and the same system?
Using multiple different drives
From my understanding, AD only disables the write cache on the specific drives it uses to store its own data. Is that correct, or does it maintain some global setting instead? If the setting is per drive only, one could simply use multiple drives and put different services on different ones.
It does seem important to use multiple physical drives, though, as write caching is maintained per drive rather than e.g. per partition.
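As an illustration of the per-device nature of this setting: on Linux the kernel exposes the write-cache policy once per whole block device (sda, nvme0n1, ...), never per partition (sda1, sda2, ...). The following Python sketch is a Linux analogy only, not the Windows mechanism, and the small helper function is my own:

```python
import glob
import os

def cache_mode(sysfs_entry: str) -> str:
    """Normalize the kernel's write_cache value to 'write back'/'write through'."""
    mode = sysfs_entry.strip()
    return mode if mode in ("write back", "write through") else "unknown"

# One write_cache file per whole device -- the policy lives at the drive
# level, just as the Windows write-cache checkbox applies per disk.
for queue_file in sorted(glob.glob("/sys/block/*/queue/write_cache")):
    device = queue_file.split(os.sep)[3]  # /sys/block/<device>/queue/...
    try:
        with open(queue_file) as f:
            print(device, "->", cache_mode(f.read()))
    except OSError:
        pass  # device vanished or is not readable
```

If that holds for Windows as well, placing AD's database on its own physical disk would confine the disabled cache to that disk.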
Put AD into a (Hyper-V) VM
According to some sources, Hyper-V doesn't cache I/O:
Hyper-V host does not cache VM-to-VHDX I/O... Very easy to check with a FileMon or similar tools @ file system level: FILE_FLAG_NO_BUFFERING and FILE_FLAG_WRITE_THROUGH are both set. SCSI CBDs traveling down the storage stack (if you'll use SCSI bus analyzer like busTRACE) below will have FUA (Force Unit Access) flag set in a write commands as well.
Doesn't this mean that the write cache of a drive hosting such a VM can stay enabled on the host, while AD's traffic is still forwarded uncached because of Hyper-V? In effect this would be an app-level setting, resulting from how Hyper-V behaves, and wouldn't necessarily impact other services on the same host and drive negatively.
Official docs by MS seem to point in the same direction:
If the virtual hosting environment software correctly supports a SCSI-emulation mode that supports forced unit access (FUA), unbuffered writes performed by Active Directory in this environment are passed to the host OS. If FUA isn't supported, you must manually disable the write cache on all volumes of the guest OS that host:
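For illustration, the two Win32 flags from the quote above have POSIX counterparts: O_SYNC gives write-through semantics (roughly what a FUA write enforces per command) and O_DIRECT bypasses the OS buffer cache. A minimal Python sketch of opting into write-through at the application level, assuming a POSIX system (the file name is made up):

```python
import os
import tempfile

# Open a file so that every write() is pushed through the OS write cache
# to stable storage before the call returns -- the POSIX analogue of
# FILE_FLAG_WRITE_THROUGH / a SCSI write with the FUA bit set.
path = os.path.join(tempfile.mkdtemp(), "ntds-analogy.bin")
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_SYNC, 0o600)
try:
    os.write(fd, b"directory transaction record")
    # With O_SYNC no separate fsync() is needed; without it, an explicit
    # os.fsync(fd) would be required for the same durability guarantee.
finally:
    os.close(fd)
```

The point of the quoted docs is that these per-write guarantees only reach the physical disk if every layer underneath (hypervisor, controller, drive) honors them.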
RAID controller with SSD caches
One of my customers uses a server with a RAID controller that supports using SSDs as a transparent cache for the logical devices it maintains. That controller even has a BBU. In theory those SSDs should handle especially random I/O better than HDDs, but in the end that is exactly the kind of cache Windows disables on the drive level on purpose. I'm not even sure how much written data can be cached that way without risking data loss, given the capacity of the BBU.
Does such a setup make sense in terms of data safety versus performance, or is it only a workaround that makes Windows believe write caching is disabled while actually using it under the hood?
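A rough way to reason about the BBU question is to estimate how long the protected cache must survive until its dirty data can be flushed once power returns. All numbers below are assumptions for illustration only; real figures would come from the controller's datasheet:

```python
# Back-of-the-envelope: how long must the BBU (or a flash-backed cache
# unit) keep cached writes alive until they can reach the HDD array?
# Both values are assumptions, not from any real controller.
dirty_cache_mib = 4 * 1024        # 4 GiB of not-yet-flushed writes
flush_rate_mib_s = 128            # sustained write rate of the array

drain_seconds = dirty_cache_mib / flush_rate_mib_s
print(f"worst-case drain time: {drain_seconds:.0f} s")  # prints 32 s
```

Under these assumptions the cache drains in well under a minute on the next power-up, so a BBU rated for hours of DRAM retention would cover it; whether Windows should be told the cache is "off" in that case is exactly the question above.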
Re-evaluate if AD is really necessary
Not a workaround in the strict sense, but I have one customer with AD installed who currently doesn't even know why it is installed. There are no corresponding users, people sign in locally on their client PCs, and so on. It might simply have been installed as part of some default Windows setup or for future use, and might be just as easy to remove.