Incidentally, I find the striped volume (RAID 0) mode in Windows 10 Disk Management more convenient than creating the RAID array directly on the controller!
With mechanical hard disks, the speeds are identical!
You no longer need RAID drivers, and after reinstalling Windows or rebuilding the system you can connect the hard disks to whichever controller ports you like.
Windows reassigns them on its own.
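To make the striping idea concrete, here is a minimal sketch (illustrative only, not tied to any particular controller or to Windows' actual on-disk layout) of how RAID 0 maps logical blocks across its member disks, which is why sequential reads and writes scale with the number of disks:

```python
# Illustrative sketch of RAID 0 striping: logical blocks are dealt out to
# member disks stripe by stripe, so consecutive stripes land on different
# disks and can be read/written in parallel.

def stripe_target(lba: int, stripe_blocks: int, n_disks: int):
    """Return (disk index, block offset on that disk) for a logical block."""
    stripe_no, offset_in_stripe = divmod(lba, stripe_blocks)
    disk = stripe_no % n_disks
    local_block = (stripe_no // n_disks) * stripe_blocks + offset_in_stripe
    return disk, local_block

# With 2 disks and a stripe of 4 blocks, stripes alternate between disks:
targets = [stripe_target(lba, stripe_blocks=4, n_disks=2)[0] for lba in range(16)]
print(targets)  # -> [0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1]
```

Whether the striping is done by the controller firmware or by Windows, the block mapping is the same kind of round-robin, which is consistent with the identical speeds reported above.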
Here are some pictures of the speeds!
Even with Serial ATA 6.0 Gbps, you have already hit the ceiling!
For SSDs, 600-700 MB/s in write mode is the end of the line.
Only NVMe breaks all records!
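The ~600 MB/s ceiling quoted above follows directly from the SATA III link itself; here is a back-of-the-envelope check (assuming the standard 8b/10b line encoding, and ignoring further protocol overhead):

```python
# Back-of-the-envelope check of the SATA 6.0 Gbps ceiling:
# SATA III uses 8b/10b line encoding, so only 8 of every 10 line bits
# carry payload data.
line_rate_bits = 6_000_000_000           # 6.0 Gbps raw line rate
payload_bits = line_rate_bits * 8 // 10  # subtract 8b/10b encoding overhead
max_mb_per_s = payload_bits // 8 // 1_000_000
print(max_mb_per_s)  # -> 600
```

Real drives top out slightly below this because of command and framing overhead, which is why NVMe over PCIe lanes is the only way past it.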
Please explain the benefits of the 16.x RAID controller? I am a little confused and don't understand 100 percent what you mean…
Please write in the RAID performance threads so we don't go off topic. Thanks.
What you have created from within Windows is a software RAID and has nothing to do with the Intel RAID ROM or EFI RAID driver modules of the BIOS. They may not even be loaded while booting.
Users who want to create a software RAID do not need the UBU tool at all; it would be wasted time.
Software RAIDs (dynamic disks) are generally not supported by WinPE backup/restore applications, so this technology is of very limited interest.
Additionally, a software RAID is not bootable.
As for the advantages of the 16.x RAID drivers, you will have to test those yourself!
I just wanted to point out that a hardware-based RAID has no speed advantage over a software RAID.
It is absolutely clear that a software RAID has nothing to do with UBU!
My system does not boot with the RAID driver, because it boots from an NVMe drive.
I said goodbye to booting via the RAID drivers a long time ago.
I simply don't use that option anymore, so why would I?
I use only RAID 0 mode, which can be backed up easily and without problems!
In stripe mode under Windows 10!
I only need the speed when moving data from A to B.
@RoadrunnerDB , @davidm71 , @100PIER :
Since the discussion about the MS software RAID configuration and performance has nothing to do with the UBU tool, I have moved our recent discussion into the much better matching sub-forum "RAID Performance" and started a new thread there.
This way, other users who don't know the differences between an Intel RAID0 array and a software RAID0 will be able to find RoadrunnerDB's report.
Hoping, that you agree with me…
Dieter (alias Fernando)
I was playing around with ReFS and Storage Spaces in Server 2016 recently. I like the idea of software RAID because I can define a dual-parity stripe and lose more disks without data loss than with RAID 6, while not having to do mirror plus stripe. I have also been bitten by silent corruption in the past with hardware RAID, and Storage Spaces + ReFS gives strong protection against that. I also like that if I do have data loss (unlikely), it is limited to the affected disk(s) and not the entire array.

A cool feature to help with performance is tiered storage, which allows an SSD/NVMe to be used as a cache for the volume. One problem with this is that it requires dedicating drives, which is really unnecessary for my use. I wonder if anyone has found a way to create a virtual file-based disk with the file stored on an SSD/NVMe without dedicating the entire drive to that task. I don't really want to virtualize everything, just leverage a virtual volume on the physical system. I am going to try IMDisk, but I'm curious if anyone else has had success doing something like this.
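For readers weighing the resiliency types mentioned above, here is a rough sketch (a hypothetical helper, not part of any Storage Spaces API; it assumes n equal-size disks and ignores slab allocation and column-count details) of usable capacity versus how many disk failures each layout is designed to survive:

```python
# Hypothetical comparison of software-RAID layouts for n equal-size disks.
# Figures are the idealized design targets; real Storage Spaces allocation
# can differ slightly.
def layout_summary(n_disks: int, disk_tb: float):
    return {
        "simple (stripe)": (n_disks * disk_tb, 0),        # no redundancy
        "two-way mirror":  (n_disks * disk_tb / 2, 1),    # half the raw space
        "single parity":   ((n_disks - 1) * disk_tb, 1),  # RAID-5-like
        "dual parity":     ((n_disks - 2) * disk_tb, 2),  # RAID-6-like
    }

for layout, (usable, lose) in layout_summary(6, 4.0).items():
    print(f"{layout:16s} usable={usable:5.1f} TB, survives {lose} disk failure(s)")
```

For example, with six 4 TB disks, dual parity yields 16 TB usable while tolerating two simultaneous disk failures.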
Revisiting this thread because I recently joined an Intel 750 400 GB NVMe PCIe drive with its U.2 sister version of the same drive using MS Storage Spaces. The configuration utility gives you two choices: mirror them or combine them. I chose to combine them and found the performance the same as a single drive.

According to my motherboard manual, the only x4 slot provides direct lanes to the CPU, and the U.2_1 port I am using on the Z270-WS motherboard also provides four lanes that are not shared, so my configuration on this particular board is as follows: I have two M.2 Samsung NVMe drives in RAID 0, and SATA ports 1-4 are in use, SATA 1-2 occupied by DVD drives and SATA 3-4 running a SATA RAID. So, as I stated earlier, that leaves the only x4 slot hosting one Intel 750 400 GB drive and U.2_1 hosting the other. There's a U.2_2 port that shares bandwidth with the M.2_2 port, and the last SATA 5-6 ports share bandwidth with the M.2_1 slot.
This all brings us back to why the performance is bad. For one thing, I cannot even turn on disk caching in Device Manager. I need another software RAID option besides what Microsoft provides. Any recommendations?
@davidm71 , I have given up on Storage Spaces for now. It seems only fit for massive deployments, and the performance was not so good for me. I was thinking of Intel RST, but its cache is limited to only 64 GB; it would be a sweet mod to be able to exceed that limit. Other options are Enmotus StoreMI/FuzeDrive (tiering) or PrimoCache (caching). I'm going to test PrimoCache. If Intel had RAID 6, I would use that (with a backup to a single drive). The remaining options, an expensive controller with MaxCache or CacheCade support, are not great ones.