There are a lot of users who just want to see extremely high benchmark scores, or show them off to someone else, without caring whether they actually benefit from those scores in their daily work.
A much cheaper solution for such users would be to run a single SSD in “RAPID Mode” via Samsung’s Magician software.
Since new Intel RST drivers from the v16 and v17 platforms and a brand-new Win10 19H1 Insider Build 18323 have become available, I decided to do a clean install of the new Win10 19H1 build onto my already existing RAID0 array and to repeat the benchmark comparison tests I had published a few days ago.
I wanted to know
a) whether the NVMe RAID0 performance depends on the NVMe driver version in use and, if so,
b) which Intel NVMe driver performs best with an Intel RST NVMe RAID0 array.
Test system:
- Chipset: Intel Z170 (mainboard: ASRock Fatal1ty Z170 Prof. Gaming i7, BIOS version: 7.50)
- RAID0 members: 2x250 GB Samsung 960 EVO SSDs
- Usage of the RAID0 array: as system drive
- OS: Windows 10 x64 Pro Insider Preview Build 18323, freshly installed in UEFI mode
- Intel RaidDriver BIOS module: RST v15.5.1.3017
- RAID0 stripe size: 32KB
Tested NVMe drivers:
- Intel RST RAID driver v14.8.18.1066 WHQL dated 09/06/2017
- Intel RST RAID driver v15.5.2.1054 WHQL dated 04/24/2017
- Intel RST RAID driver v15.5.5.1059 WHQL dated 06/01/2017
- Intel RST RAID driver v15.44.0.1010 dated 02/07/2018 (= Win10 in-box Intel RAID driver since RS4)
- Intel RST RAID driver v15.9.4.1041 WHQL dated 03/20/2018
- new: Intel RST RAID driver v16.7.10.1030 WHQL dated 11/16/2018
- Intel RST RAID driver v16.8.0.1000 WHQL dated 12/03/2018
- new: Intel RST RAID driver v17.0.1.1075 WHQL dated 12/28/2018
Here are my test results (one benchmark screenshot set per listed driver):
- Intel RST driver v14.8.18.1066
- Intel RST driver v15.5.2.1054
- Intel RST driver v15.5.5.1059
- Intel RST driver v15.44.0.1010
- Intel RST driver v15.9.4.1041
- Intel RST driver v16.7.10.1030
- Intel RST driver v16.8.0.1000
- Intel RST driver v17.0.1.1075
Ranking of the best performing Intel NVMe RAID drivers for my Z170 system:
- Intel RST driver v16.8.0.1000
- Intel RST driver v16.7.10.1030
- Intel RST driver v15.9.4.1041
- Intel RST driver v15.5.2.1054
Conclusion:
The measured benchmark scores were so close together that I doubt a user would notice the performance differences during everyday work.
@100PIER and other interested users:
Since some new Intel RST drivers and a new final Win10 version have become available in the meantime, I ran some new benchmark comparison tests with my Z170 system yesterday.
Test system:
The same as in January 2019 (post #17), except for the OS, which was now Win10 x64 Pro v19H1 Build 18362.175, freshly installed in UEFI mode.
Tested Intel RST RAID drivers:
- Intel RST v14.8.18.1066 WHQL dated 09/06/2017 (latest from the v14 platform)
- Intel RST v15.44.0.1010 dated 02/07/2018 (Win10 in-box RAID driver since RS4)
- new: Intel RST v15.9.6.1044 WHQL dated 03/01/2019 (latest from the v15 platform)
- new: Intel RST v16.8.2.1002 WHQL dated 02/27/2019 (latest from the v16 platform)
- new: Intel RST v17.2.6.1027 WHQL dated 03/19/2019
- new: Intel RST v17.2.11.1033 WHQL dated 05/07/2019 (currently latest from the v17 platform)
Here are my test results (one benchmark screenshot set per listed driver):
- Intel RST driver v14.8.18.1066
- Intel RST driver v15.44.0.1010
- Intel RST driver v15.9.6.1044
- Intel RST driver v16.8.2.1002
- Intel RST driver v17.2.6.1027
- Intel RST driver v17.2.11.1033
Ranking of the best performing Intel NVMe RAID drivers for my Z170 system:
- Intel RST driver v17.2.6.1027
- Intel RST driver v15.9.6.1044
- Intel RST driver v17.2.11.1033
Conclusion:
Compared with my previous tests from January, I now got
a) generally better scores (probably due to the meanwhile more mature Win10 build) and
b) the Intel RST v17 platform driver v17.2.6.1027 as the winner.
EDIT at 06/19/2019: Since I had forgotten to enter the results I got with the currently latest Intel RST RAID driver v17.2.11.1033, I have added them today. It is interesting that this latest driver obviously performs worse than the previously released v17.2.6.1027 driver, which belongs to the same v17.2 development branch. So the added results didn't fundamentally change the performance ranking.
@Fernando ,
Very interesting test results.
On my side, I have now sold my HPT NVMe RAID x16 4x M.2 PCIe 3.0 add-in card, since it was no longer being used at home.
So now I only have the opportunity to test SATA6G RAID0 configurations with the Intel v17.2.11.1033 drivers, handling some old Samsung SSD 850 PRO and 840 PRO devices on X99 under v19H1 Build 18362.175.
The Anvil test results are quite good and stable with the more recent Intel drivers:
On my Z390 PC I use a non-RAID NVMe device configuration and also a SATA6G 850 EVO device:
Maybe my test results are in the wrong place.
Since I currently have access to both my new AMD X570 chipset system and my still-working Intel Z170 system, I took the opportunity to compare an Intel NVMe RAID0 array with an AMD NVMe RAID0 array using exactly the same RAID0 members (2x 250 GB Samsung 960 EVO NVMe SSDs). Onto the RAID0 array of both systems I did a clean install of Win10 x64 v2004 and ran some benchmark tests.
Here are the results (left pictures: Intel RAID0, right pictures: AMD RAID0):
Just for comparison purposes, here are the benchmark results I got today with my new AMD X570 system running Win10 x64 v2004 on a single 1 TB Sabrent Rocket 4.0 NVMe SSD with the generic MS in-box NVMe driver (left pictures) and the mod+signed generic Samsung NVMe driver v3.3.0.2003 (right pictures):
@Fernando @Lost_N_BIOS
Interesting results, I think you are correct about the in-use NVMe SSD models and the AMD RAID driver.
I just banged off the CDM benchmark results using the AMD RAIDXpert2 9.3.0.120 windoze drivers; hadn’t even loaded the 9.3.0.158 windoze drivers yet.
Notably, the Samsungs perform better at larger capacities.
Samsung 960 EVO (max sequential read / write, MB/s):
- 250 GB: 3200 / 1500
- 500 GB: 3200 / 1800
- 1 TB: 3200 / 1900
In contrast, the Samsung 970 PRO:
- Sequential read: up to 3,500 MB/s
- Sequential write: 512 GB up to 2,300 MB/s; 1,024 GB up to 2,700 MB/s
- Random read (4 KB, QD32): 512 GB up to 370,000 IOPS; 1,024 GB up to 500,000 IOPS
Obviously, the larger the SSD, the higher the throughput, and along with that, the higher the price!
Further to this, the correct configuration of the BIOS could also have a big impact; in particular, a lack of x4/x4/x4/x4 PCIe bifurcation in the BIOS settings could be degrading the speed of your Samsung 960 EVO array on the X570 chipset. Meanwhile, the ROG Zenith Extreme has PCIe bifurcation built into the stock BIOS, so each NVMe SSD has exclusive access to its own x4 PCIe lanes. Also, when using the Asus M.2 add-in card, there is a BIOS setting to run the chosen PCIe add-in slot as x16, x8/x8 or x4/x4/x4/x4 (see the bandwidth sketch below). If your X570 board doesn't have built-in PCIe bifurcation, I'm sure @Lost_N_BIOS can make that happen in short order!
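To put some rough numbers on that lane math, here is a small illustrative sketch (Python; the per-lane figures are the usual PCIe 3.0/4.0 approximations, and real-world throughput lands below these raw caps):

```python
# Rough PCIe/DMI bandwidth math for NVMe RAID0 planning (illustrative only;
# real-world numbers are lower because of protocol and filesystem overhead).
PCIE3_MBPS_PER_LANE = 985    # ~985 MB/s per PCIe 3.0 lane
PCIE4_MBPS_PER_LANE = 1969   # ~1969 MB/s per PCIe 4.0 lane

# One NVMe SSD on a bifurcated x4 slice of an x16 slot (x4/x4/x4/x4):
per_drive_x4 = 4 * PCIE3_MBPS_PER_LANE        # ~3940 MB/s ceiling per drive

# Four drives striped, each on its own x4 CPU lanes (no DMI bottleneck):
cpu_array_ceiling = 4 * per_drive_x4          # ~15760 MB/s theoretical

# Chipset (RST) RAID instead: all traffic funnels through DMI 3.0 x4,
# electrically a PCIe 3.0 x4 link shared with everything else on the PCH:
dmi3_x4_cap = 4 * PCIE3_MBPS_PER_LANE         # ~3940 MB/s shared cap

# Drive-side limit example: 2x 250 GB 960 EVO (spec: 3200/1500 MB/s r/w)
raid0_write_ceiling = 2 * 1500                # ~3000 MB/s striped write

print(per_drive_x4, cpu_array_ceiling, dmi3_x4_cap, raid0_write_ceiling)
```

This is also why chipset RAID0 results tend to cluster together regardless of driver version: the DMI link, not the SSDs, is often the ceiling.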
@hancor:
Thanks for your comments.
Everything you have written is correct and well known to me. By the way, I have had long-term experience with NVIDIA nForce RAID systems since 2006 (look >here<; they called me “Easy Raider” there) and with Intel RAID systems since 2011.
That was the reason why I tested the creation of an AMD NVMe RAID0 array yesterday. I just wanted to know how the creation of an AMD RAID array works and whether an AMD RAID0 array performs better or worse than an Intel RAID0 array when exactly the same RAID members are used.
Since I use my computer just for office work and not for processing very large files (video encoding etc.), I don't intend to replace my extremely fast Sabrent Rocket 4.0 with any RAID0 array.
Nevertheless, I thank you for your comments and the tip regarding PCIe bifurcation.
@Fernando
Haha… “Easy RAIDer”, you hippie!
I prefer “Win-RAIDer”, the “cutting edge” of BIOS modding!
Appropriate in these viral times.
Yes, the PCIe 4.0 drives will kick up some dust in performance.
But I knew when I got the ROG Zenith Extreme that my daughter would be doing some video editing, and my wife’s hobby is photography.
I also have a brother-in-law with a business that digitizes older films and does photo editing, so ganging four NVMe drives in RAID is my next project!
That should top out at a crazy 14,000 MB/s (4 x ~3,500 MB/s per drive), which should last well until PCIe 5.0 and PCIe 6.0 hit the streets for ‘everyman’!
I agree, though: if one has less demanding needs, PCIe 4.0 NVMe should be plenty fast until they drown us all in 8K to 16K films, videos and photos.
Hi,
Have you guys tried benchmarking soft-RAID (Storage Spaces) performance vs. Intel RAID? Like you, I am getting sub-optimal performance with two M.2 cards RAIDed together; it almost feels as if each card is running at x2 speed. Before, I had soft-RAIDed two Intel 750 400 GB drives and got better performance than running two much faster drives off the RST controller. So I bet if you run your tests against an MS Storage Spaces RAID, it will beat Intel RAID easily; a quick way to sanity-check both volumes is sketched below.
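For a quick sanity check outside the GUI tools, something like this minimal Python sketch could be pointed at both volumes (the test path is hypothetical, and note that the Windows file cache will inflate the numbers unless the file is far larger than installed RAM):

```python
# Minimal sequential-read throughput check (a rough sketch, not a substitute
# for CrystalDiskMark or Anvil). Point TEST_FILE at a large existing file on
# the volume under test; the OS file cache will skew results unless the file
# is much larger than RAM.
import os
import time

TEST_FILE = r"D:\bench\testfile.bin"   # hypothetical path on the array
CHUNK = 8 * 1024 * 1024                # 8 MiB reads approximate sequential I/O

def seq_read_mb_per_s(path: str) -> float:
    size_mb = os.path.getsize(path) / (1024 * 1024)
    start = time.perf_counter()
    with open(path, "rb", buffering=0) as f:   # unbuffered at the Python level
        while f.read(CHUNK):
            pass
    return size_mb / (time.perf_counter() - start)

if __name__ == "__main__":
    print(f"{seq_read_mb_per_s(TEST_FILE):.0f} MB/s")
```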
Thanks
@davidm71
The main disadvantage of a software RAID is that it is not bootable.
That is the reason why I have never tested such a RAID configuration.
What about a data drive? So it's not bootable, that's OK. Still curious.
I am definitely tempted to test and compare MS Storage Spaces against the Intel motherboard RAID solution. Though I have a couple of questions, or rather wonder: when RAID is disabled on my Z590 board, is each individual M.2 slot still routed through the chipset, where the DMI x4 link bottleneck exists, or is each port unleashed independently of the other at full x4 + x4 speeds?
The other question I have is this: once I upgrade the 10th-gen 10700K CPU to an 11700K, the DMI bandwidth on Z590 is supposed to double; does that mean I will see a doubling in performance with Intel motherboard RAID? In the manual it states that the board “supports PCIe bandwidth bifurcation for RAID on CPU function” and that “Only Intel SSDs can active Intel RAID on CPU function in Intel platform”.
So I think the latter restriction only applies when bifurcation (RAID on CPU) is involved. Beyond that, I wish someone would mod out this “Intel SSDs only” nonsense. Anyhow, my next step is to replace the CPU. I will save my benchmark scores from before and after and post them.
Thanks
Have some benchmarks to share. First, the system configuration:
- Asus Z590-A, 10700K (old) / 11700K (new)
- EVGA 3080 FTW3
- G.Skill 2x 16 GB 3200 MHz RAM
- Samsung 980 Pro (single-drive configuration)
- SK Hynix P41 Gold 1 TB x2 in RAID0 configuration
- Intel RST driver v18.31.3.1036
First score: the Samsung 980 Pro running at PCIe 3.0 speeds with the 10700K:
Second image: the Samsung 980 Pro running at PCIe 4.0 speeds with the 11700K:
Final CrystalDiskMark of the 980 Pro running at PCIe 4.0 speeds (I forgot to save the PCIe 3.0 image):
Now for the interesting part of my tests, which shows the doubling of the DMI link affecting motherboard RAID.
PCIe 3.0 RAID0:
And after the processor upgrade, on the PCIe 4.0 11700K platform:
So according to this analysis, with or without RAID, drive performance differed by about 20 to 25 percent according to Anvil, while sequential speeds doubled according to CrystalDiskMark. In any case, it's the first time I've broken through the DMI 3.0 barrier with the RAID controller. Pretty exciting!
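For reference, the rough DMI arithmetic behind that doubling looks like this (an illustrative sketch using the commonly quoted link figures; measured throughput sits somewhat below these raw caps):

```python
# Approximate DMI ceilings on Z590 (illustrative; protocol overhead means
# measured throughput sits somewhat below these raw link caps).
PCIE3_MBPS_PER_LANE = 985   # DMI 3.0 lanes are PCIe-3.0-equivalent

dmi_x4 = 4 * PCIE3_MBPS_PER_LANE   # 10th gen (10700K): DMI 3.0 x4 -> ~3.9 GB/s
dmi_x8 = 8 * PCIE3_MBPS_PER_LANE   # 11th gen (11700K): DMI 3.0 x8 -> ~7.9 GB/s

print(f"10700K chipset link ~{dmi_x4} MB/s, 11700K chipset link ~{dmi_x8} MB/s")
```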
@davidm71
Thanks for having done the benchmark tests and posted the results.
Questions:
- Why didn’t you use the currently latest v18 platform Intel RST RAID driver v18.37.6.1010 dated 09/19/2022?
- Did you notice the performance gain of the Intel RAID0 array while doing “normal” work on the PC?
- Do you plan any test with a “Software RAID” configuration?
- I made a mistake and wasn't aware that I wasn't testing version 18.37.6.1010. I will retest and post results; however, I cannot go back to the old PCIe 3.0 processor. At the very least it will be a comparison of the two driver versions.
- I'm not sure what you mean. Did I notice a performance gain while doing “normal” work? Not sure, as I have not had time to get a feel for the difference, except that Windows does seem to start up much faster. Next I am going to test how fast it loads a heavily modded installation of Skyrim, which takes extremely long; that was the primary reason for upgrading the system.
- At this point, testing a “software RAID” is a waste of time and write cycles in my opinion, as the PCIe 4.0 DMI bandwidth negated any desire to try it out. Perhaps I will test it with some spare SSDs in the future for curiosity's sake.
Thank you
@davidm71
Thanks!
By the way:
- Your inserted Anvil pictures have been cropped from a full desktop screenshot and don't look perfect. If you want an easier method with a better-looking result, please have a look at >this< tip I gave a very long time ago.
- You can optimize the information for viewers of your benchmark pictures by entering the most important details into the tool's GUI before starting the test (Anvil's tool and CrystalDiskMark offer space for typing such information).
Actually, I used the built-in ‘save picture’ tool in each respective application. I noticed that too; I will do manual edits next time. Thank you.
Edit: About feeling a real-world performance increase, I can now say that I do in fact feel a difference. Before, it would take up to 30 seconds to load all the Skyrim mods; now it takes about 5 seconds.