Unexpected NVMe RAID-0 performance levels

Hi all, my first post here. I’ve never had an issue with RAID that I could not eventually sort out myself,
but this one is new to me…

I just upgraded my PC (finally) to take advantage of NVMe RAID-0.

Gigabyte Z390 Gaming SLI (Z390)
9700K CPU, fast RAM, blah blah, and a shiny new pair of Silicon Power SP001TBP34A80M28 SSDs.

The SSD speeds are given as 3400 MB/s and 3000 MB/s, and I’ve read reports where those figures are exceeded.

I laid the SSDs into the two motherboard M.2 slots, set up the RAID-0 in the BIOS, and installed Windows 10 Pro
(the new v2004).

I’ve been trying a few different benchmarks, but I am seeing some disappointing results. Before I start
doing stupid things, I wonder if anyone can point me in the right direction, as I expected this RAID array to perform out of the box,
like all my RAID arrays have in the past, legacy and SATA SSD. I am new to NVMe.

conraid.jpg



I really was expecting to see speeds in the order of 6000 MB/s, silly me!

@Congo :
Welcome to the Win-RAID Forum!

According to Anvil’s benchmark tool, your Intel RAID0 array is managed by the Win10 in-box Intel RAID driver v15.44.0.1015, which is a SATA RAID and not an NVMe RAID driver.
Only the v16 and v17 platform Intel RST drivers fully support the NVMe protocol.
Solution: Update the driver of the “Intel Chipset SATA RAID Controller”, which is listed within the “Storage Controllers” section of the Device Manager, to one of the latest v16 or v17 platform Intel RST drivers, or install the related complete Drivers & Software Set (this way you will get additional access to the Intel RST Control Panel). You can find the download links >here<.
If a simple driver update is not possible, I recommend doing a fresh install of the OS, deleting the already existing partitions and loading the desired “pure” v16 or v17 platform Intel RST driver from a USB flash drive. The complete Intel RST Drivers & Software Set can be installed once the OS is up and the .NET Framework 3.5 has been installed (it is required for the installation of the Intel RST Drivers & Software Set).
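
If you prefer to verify the result from a script rather than clicking through Device Manager, a minimal sketch like the one below can help (this is only an illustration under assumptions: Python 3 on Windows 10, with the stock wmic tool still available). It lists the driver name and version Windows reports for the storage-related devices:

```python
# Minimal sketch: list the PnP driver names and versions Windows reports, so
# you can confirm whether the loaded Intel RAID driver is the in-box v15.x one
# or the manually installed v16/v17 RST driver. Assumes Windows 10 with wmic.
import subprocess

out = subprocess.run(
    ["wmic", "path", "win32_pnpsigneddriver",
     "get", "devicename,driverversion"],
    capture_output=True, text=True
).stdout

for line in out.splitlines():
    # Keep only the storage-related entries (RAID / NVMe controllers).
    if "RAID" in line or "NVMe" in line:
        print(line.strip())
```

The same version is shown on the Driver tab of the controller’s properties in Device Manager; the script just makes it easier to copy into a forum post.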

Good luck!
Dieter (alias Fernando)

P.S.: I have moved this discussion into the better matching “RAID Performance” Sub-Forum.

I followed your advice, Fernando, all except reinstalling Windows, which I can try, but I will ask you a question before I do, and then report my results.

I downloaded the new driver (17.9.1.1009) and installed it through Device Manager, rebooted, and tested again with similar results to before.

I then ran SetupRST.exe (17.9.1.1009), rebooted, and tested again with similar results.

Anvil’s benchmark tool now reports the storage driver as iastorAC after the new drivers were installed, whereas before it was shown as iastorAVC.

Do you deduce the driver version you quoted (v15.44.0.1015) from the storage driver shown as iastorAVC in Anvil’s tool?
I do not see that version information on the tool’s screenshot.

However, on my old Win7 PC with a SATA SSD array, Anvil’s tool does show the driver version clearly as iastorA 15.2.0.1020,
leaving me to wonder why the complete version number is not showing on the new PC when I run Anvil’s tool.

Device Manager now shows 17.9.1.1009.
Installing the new drivers by .inf and then later by SetupRST.exe both went smoothly, with confirmation by info boxes.

Will a Windows re-install with the new RAID drivers loaded at setup be a solution, or do you think I have another problem?

conraid2.jpg



AIDA64 shows me a version 10 driver and the new driver in different info boxes…

conraid3.jpg



conraid4.jpg

@Congo :

Yes.
You can find the version of the Win10 in-box Intel RAID driver here:


[quote="Congo, post:3, topic:35242"] wonder why the complete version number is not showing on the new PC when I run Anvil's tool. [/quote] It will show the driver version after having installed the complete Intel RST Driver & Software Set. [quote="Congo, post:3, topic:35242"] Will a windows re-install with the new raid drivers loaded at setup be a solution, or do I have another problem do you think? [/quote] You should find it out yourself. Maybe a look into the start post of >this< thread will help you to get the best possible performance.

Thanks for the advice and the link Fernando.

What sort of speeds should I expect with these NVMe SSDs in RAID-0?

I’ve read claims about high speeds with these modules, but not in RAID.
All my RAID arrays in the past have shown nearly double the speed of a single drive;
should I expect the same with these, or don’t they scale the same way?

Also, regarding the last two screenshots I sent: the first one shows the
Intel RAID 0 Volume driver as an old 2006 version 10 driver, and the final screenshot
shows the controller with the new driver. Do you think this is significant?

Regarding the link you sent above:
How to boost the Intel RAID0 performance

I am aware of the optimizations addressed in that thread, and I will go and double-check
those settings again. From what I can see, though, they are performance tweaks that may
have some effect, but I’m not sure that they address what looks to me like an underlying problem
here, where I am clearly losing a lot of potential from my array, not simply a matter of
a few performance tweaks.

I’ll keep investigating and if you have any other thoughts on this, I’d welcome your ideas.

@Congo :

No, the two screenshots show the drivers of completely different devices. The driver shown for the disks themselves (all HDDs/SSDs listed within the "Disk drives" section of the Device Manager) is Microsoft's own driver named disk.sys; it has the same version as the OS and cannot be replaced by any other driver. When it comes to performance, the only relevant driver is the one for the related "Storage Controller" (here: Intel NVMe RAID Controller).

Your benchmark results are not bad if you compare them with the ones I got with an Intel NVMe RAID0 array (look >here<).

It is an illusion to think that the creation of a RAID0 array consisting of 2 SSDs will double the performance of a system. Only the benchmark results while processing very large files may be doubled, not the performance during your daily work, where the big majority of the processed files are very small. Look at the 4KB scores when you compare the performance of a single (non-RAIDed) SSD with a RAID0 SSD array.
The performance gain of an NVMe RAID0 array is only noticeable if your main computer work consists of processing very large files (e.g. video encoding).
After having seen my recent RAID0 vs. non-RAID benchmark comparison results (look >here<), I decided to stick with a very good single NVMe SSD instead of taking the doubled chance of a complete data loss by running a RAID0 array.
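
A crude way to see the point about small files: model each request as a fixed latency plus a transfer time. For a 4 KB read the latency term dominates, so doubling the sequential bandwidth barely changes anything; for a 4 MB read the transfer term dominates and an ideal RAID0 nearly halves it. The numbers in the sketch below are assumptions for illustration, not measurements of any particular SSD:

```python
# Toy model only: request time = fixed latency + size / bandwidth.
# The latency and bandwidth figures are illustrative assumptions,
# not measurements of any particular SSD or array.
def request_ms(size_bytes, bandwidth_mb_s, latency_ms=0.08):
    return latency_ms + size_bytes / (bandwidth_mb_s * 1_000_000) * 1000

for label, size in (("4 KB", 4 * 1024), ("4 MB", 4 * 1024 * 1024)):
    single = request_ms(size, 3400)   # one drive at ~3400 MB/s
    raid0 = request_ms(size, 6800)    # ideal 2-drive RAID0, ignoring the DMI cap
    print(f"{label}: single {single:.3f} ms, RAID0 {raid0:.3f} ms "
          f"-> {single / raid0:.2f}x faster")
```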

There is nothing wrong with my old PC setup; the main reason I upgraded was to take advantage of the new storage devices.

After looking at a review of my new SSDs (linked below), I can see that real-world performance in game loading is better with my old Crucial MX500 SATA SSDs!
This was not a nice revelation after spending a hard-won $1500 in search of performance…

https://www.tomshardware.com/reviews/sil…ssd,6180-2.html

This still does not explain why I don’t see 3400/3000 MB/s speeds in my synthetic benchmarks, let alone anything higher, considering the new SSDs are a RAID set.

In fact, the speeds are significantly under what even a single drive should deliver, and well below many other test results I’ve seen in reviews of these SSDs.

I’ve got some work to do, so I will report back if I figure this out. Thanks for your help.

BTW… I am interested in synthetic benchmark results purely because they give me something to compare against, so I know whether I am configured correctly.
----------------------------

Could this be my bottleneck, or is it just advertising hype? (First paragraph - DMI limitation?)
https://silentpc.com/articles/nvme-raid-solutions

My speeds don’t seem even close to saturating those figures, though.

I recently read that the limit for each PCIe 3.0 lane is about 1000 MB/s, so I figured on 8 lanes’ worth of bandwidth being available in total.

But if there is a 4 GB/s DMI bottleneck, that is something I wasn’t aware of.

Anything going through the PCH, and not directly to the CPU, will indeed be limited by DMI 3.0.
Some systems have PCIe lanes connected directly to the CPU that do not run through the PCH, and those will not hit that limit.
That article explains it all.
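
To put rough numbers on the DMI ceiling, here is a small back-of-the-envelope calculation (a sketch only; it ignores everything beyond the 128b/130b link encoding and assumes the drives’ rated 3400 MB/s sequential reads):

```python
# Back-of-the-envelope DMI 3.0 ceiling vs. the combined rated speed of the
# two NVMe drives. PCIe 3.0: 8 GT/s per lane with 128b/130b encoding.
PCIE3_LANE_MBPS = 8_000 * 128 / 130 / 8   # ~985 MB/s of payload per lane

dmi3_ceiling = 4 * PCIE3_LANE_MBPS        # DMI 3.0 is electrically a x4 PCIe 3.0 link
two_drive_rated = 2 * 3400                # 2 x SP001TBP34A80M28 rated sequential reads

print(f"DMI 3.0 ceiling:      {dmi3_ceiling:,.0f} MB/s")
print(f"2-drive rated reads:  {two_drive_rated:,.0f} MB/s")
print(f"Capped by the link:   {two_drive_rated - dmi3_ceiling:,.0f} MB/s unusable")
```

So even a perfectly scaling two-drive RAID0 sitting behind the PCH cannot exceed roughly 3.9 GB/s in a sequential benchmark, no matter what the drives themselves are capable of.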

I also discovered that my two “identical” Silicon Power SP001TBP34A80M28 SSDs are not identical.
They have different hardware IDs and they use completely different onboard controllers (Silicon Motion and Phison respectively)!

I am seeking rectification from the vendor.

I re-installed Win10 on another array, a 2 x 2 TB 7200 rpm Seagate RAID0. I did NOT change the BIOS config
from the UEFI NVMe-compatible setup. I tested those disks as an array and individually, with normal
data rates as expected in each case. Also tested was a 2 x 1 TB Crucial MX500 SSD array, and it tested normally
as well; both of those arrays performed at almost double their single-drive data rates in the synthetic benchmarks.

I deleted the new 2 TB NVMe RAID0 array and got terrible results on those NVMe SSDs as individual drives, with one of the
drives showing appalling read rates. Something is definitely wrong.

@Congo - Did you build the NVMe array in the BIOS itself, or in the Intel Ctrl+I RST interface? Test both ways, and also test via a normal Windows software stripe (i.e. no RAID built in BIOS or Intel Ctrl+I) for comparison as well.
Also, you may want to try PCIe adapter cards instead of the onboard slots; sometimes that is faster as well. You can use the cheap $5 adapters. I use these - https://www.amazon.com/JMT-M-Key-Adapter…s/dp/B07ZB8BSNT
Do both of your drives in the current slots show up as x4 PCIe 3.0 in CrystalDiskInfo when selected? You’ll have to check that while not in an array, I believe.
DMI 3.0 is limited to just under 4 GB/s.

Hi Lost_N_BIOS,

Thanks for the reply. I’m going to forget about RAID for now; these SSDs seem to have issues on their own,
and I think I’m going to have trouble getting my vendor to replace them, but we’ll see.

The motherboard (Gigabyte Z390 Gaming SLI) BIOS has an Easy RAID mode and an Advanced mode.
The Easy RAID setup has a GUI where you select from a graphic of either a SATA or a
PCIe device, pretty simple.
The Advanced screen is pretty much a normal IRST RAID setup screen. I have created and tested RAID 0 in both.

I have not created a Windows RAID or a “legacy” (Ctrl+I) RAID on this board as yet. (Edit: OK, gonna give this a try.)

I might try your Ctrl+I suggestion, but I thought that was legacy RAID only.

CrystalDiskInfo only says NVM Express 1.3.

I suppose I could try the PCIe adapters. I would have thought the motherboard connectors would be fine, but you
never know, I guess…

I’m not ruling out the motherboard or its BIOS as the fault either. I have tried three BIOS versions.
Buying two “identical” SSDs and then finding a different controller on each one has been a low blow;
I’m not happy about this.
(Silicon Motion and Phison respectively)
-------------------------------------

Here the two NVMe drives are independent, and some AIDA64 info is shown beside the benchmark results for each drive,
showing the different controllers. Many other architectural differences exist between the two SSDs (not shown here);
it’s not just the controller that is different. Remember, these are Silicon Power SP001TBP34A80M28 NVMe SSDs,
bought at different outlets of the same store due to stock shortages at the time. They are NOT the same SSDs,
even though the model number is identical!

ssds.jpg



Below, we see a RAID 0 result with the same modules. This is a Windows Disk Management striped array,
which converts the drives to dynamic disks and is useless as a boot drive.
Interestingly, the array has the same read speed as the Phison-controlled SSD had as a single drive (drive D: in the pic above).
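
For anyone wanting to reproduce this kind of software stripe for testing, a minimal sketch is below. It assumes Python 3 on Windows run as administrator and uses the stock diskpart tool; the disk numbers 1 and 2 are placeholders, and it wipes whatever disks you point it at, so treat it purely as an illustration of the dynamic-disk stripe described above.

```python
# Sketch only: build a Windows Disk Management style stripe (dynamic disks)
# with the stock diskpart tool. Disk numbers 1 and 2 are placeholders, and
# the script DESTROYS everything on them - run as administrator, after
# checking the numbers with "list disk".
import os
import subprocess
import tempfile

DISKPART_SCRIPT = """\
select disk 1
clean
convert dynamic
select disk 2
clean
convert dynamic
create volume stripe disk=1,2
format fs=ntfs quick label=Stripe
assign letter=S
"""

with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write(DISKPART_SCRIPT)
    script_path = f.name

try:
    subprocess.run(["diskpart", "/s", script_path], check=True)
finally:
    os.remove(script_path)
```

A stripe created this way lives entirely in Windows, so it cannot be used as a boot volume, exactly as noted above.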

CrystalDiskMark is giving me higher synthetic scores than Anvil’s tool.

Win10Raid.jpg



I did try the Ctrl+I legacy setup, but only after re-configuring the BIOS for legacy boot; otherwise the Ctrl+I option is not visible.
I could not see the NVMe SSDs there; only the boot RAID0 array was visible (2 x 2 TB 7200 rpm Seagates).
I pretty much expected that, though.

Overall, what I’ve seen so far is that these higher-end Silicon Power SSDs are not performing as others report they do.
I’m really not sure why. I’d like to blame the motherboard or its BIOS, but at least the legacy and SATA SSD arrays are working
as expected on this new PC; it’s just the NVMe PCIe SSDs that I’m having issues with, despite carefully setting the BIOS up
and trying to be consistent with my testing. I also get quite different results across the synthetic tests, so this is puzzling me.

In CrystalDiskInfo you need to look here, as shown in this image. Make sure each drive shows as x4 PCIe 3.0.

CDM-NVME-PCIEWidth.png



Test all the ways I mentioned, so you know which works best for you.

Yes, those NVMe drives do not match, but this shouldn’t limit speeds; only DMI 3.0 will.

Hi Lost_N_BIOS,

The transfer mode field was blank when I tested and reported my results, which I thought was odd.
Now CrystalDiskInfo does not see my disk at all; it’s completely blank.

I tested all the modes except for Ctrl+I legacy mode, as that mode did not see the NVMe SSDs.

I have made a new benchmark with the same Windows 10 OS imaged back onto a newly created RAID-0 array on the NVMe SSDs, configured through IRST in the BIOS, same as last time, but this time I used a 16 KB stripe size, which I thought I had used originally.
I tested with CrystalDiskMark 7 and it gave me the result shown here:

ssd-good.jpg



Anvil’s tool still shows considerably lower speeds, though (2000 MB/s read, 3000 MB/s write).

I feel like an idiot for buying into this system without understanding the DMI 3.0 bottleneck, and I now realise that I should have just bought a high-speed 2 TB SSD. I got sucked into the hype this time, and I don’t usually do that.
I did see some amazing benchmarks when I was first looking into NVMe RAID, but I simply did not understand the limitations of Intel chipsets, and I am really embarrassed about getting caught out by my own ignorance.

The Z390 chipset Block diagram:

Z390 Block Diagram mod.jpg



As a gamer, I use the full x16 PCIe slot for my graphics card, so it looks like I cannot spare any CPU PCIe lanes for the new NVMe SSDs, even if I did buy add-on PCIe adapter cards for them. Everything else goes through DMI 3.0, so the SSDs can only share that bandwidth with the other devices on the DMI bus. I get that now, and I understand why I cannot see any dramatic synthetic benchmark speed increases using RAID.

So, other than feeling very silly, there may be one thing I can contribute to the RAID debate, based on a long time using Windows RAID-0 boot arrays. This is not the purpose of this thread, but it has a little to do with why I am here to begin with.
My personal experience, despite all the “evidence” to the contrary, is that RAID-0 has made my Windows experience a lot better: in the past it massively reduced wait times when I was working with large data transfers and during backup operations. Working back to back with non-RAID-0 systems, I really noticed the difference in transfer rates. I do a lot of this type of work, so I naturally thought that NVMe RAID would scale as I was used to all these years. I was wrong, at least as far as this Intel chipset goes, and I now understand why some people are moving to AMD chipsets to get fast NVMe RAID arrays set up. A week ago I thought people were silly to buy into AMD, but now I get it.

Thanks Fernando and Lost_N_BIOS for all your help and patience with my thick head.

I’m going to do some research on whether or not I can split off 8 lanes of PCIe from my graphics card and still maintain game performance.

My motherboard supports PCIe lane bifurcation in BIOS.

So, my first question is: do I need all the PCIe bandwidth for my graphics card?

Then, if I split off 8 lanes to the other x16 slot (which will then have x8 PCIe available), is that slot usable for NVMe SSD devices?

And is that CPU-controlled PCIe slot able to be RAIDed with one of the motherboard PCIe SSD connectors that runs through the DMI bus?

@Congo :

If you choose a higher stripe size (e.g. 32KB or 64KB), you will get higher Seq 4MB READ and WRITE scores.

Thanks Fernando.

I normally choose a 64 KB stripe for my legacy RAID drives; they are much larger in capacity and more tolerant of the disk space wasted by the larger stripe size.

I decided to try 16 KB stripes on the SSDs to save a little space, as I will be loading a lot of data onto them. For example, just one of my flight simulators, Condor2,
has over 1.2 TB of HD scenery data. Condor2 can pause during texture loads, so it’s important to have the textures load quickly.

Another reason I chose the 16 KB stripe size was that I reviewed a test where some guys did a real-world analysis of RAID-0 performance based on usage similar
to mine, and their tests concluded that 16 KB stripes were the most efficient overall. So I did not pick 16 KB just because I like the number :)

One other thing: I suspect I am already bottlenecked by the DMI 3.0 issue and won’t see any further increase in speeds in synthetic testing, and I’m not sure
whether 64 KB stripes will make any difference to real-world performance; perhaps you have some thoughts on this?

@Congo :
I wrote my recent comment because I had the feeling that you may have underestimated the impact of the RAID array’s stripe size on the benchmark results.
The stripe size is very important if you want to compare the benchmark results of your specific RAID array with ones published somewhere else.
For RAID0 arrays I usually set the stripe size to 64 KB. A lower stripe size may be more efficient regarding the effective data transfer, but it will give you lower benchmark scores.
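
To illustrate why the stripe size shows up so directly in the sequential benchmark numbers, here is a minimal sketch of how a two-drive RAID0 layout splits a request; the mapping formula is the generic textbook one, not Intel’s specific implementation. A 4 MB sequential transfer becomes 256 chunks with a 16 KB stripe but only 64 chunks with a 64 KB stripe, so the per-chunk overhead differs even though both layouts keep both drives busy.

```python
# Sketch: generic RAID0 address mapping for a 2-drive array, plus a count of
# how many stripe-sized chunks a 4 MB sequential transfer is split into.
# Purely illustrative; it does not model the Intel RST controller internals.
def raid0_map(offset_bytes, stripe_size, num_drives=2):
    stripe_index = offset_bytes // stripe_size
    drive = stripe_index % num_drives                     # which member serves it
    offset_on_drive = (stripe_index // num_drives) * stripe_size \
        + offset_bytes % stripe_size
    return drive, offset_on_drive

for stripe_kb in (16, 64, 128):
    chunks = (4 * 1024 * 1024) // (stripe_kb * 1024)      # pieces of a 4 MB read
    print(f"{stripe_kb:>3} KB stripe -> 4 MB transfer split into {chunks} chunks")

# Example: which drive holds byte offset 100_000 with a 16 KB stripe?
print(raid0_map(100_000, 16 * 1024))
```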

Ah Yes! For the purpose of comparison, thank you.

One thing I can say to all that: the stripe size your controller defaults to is its “native” size; it is most effective at that size, and it is the most latency-friendly config your controller is capable of. Straying from it at all will cost you the real-time power needed to light up an effective array.

The problem is that virtual memory is only used readily if the bandwidth and latency are favourable; if not, the system will just focus on the infrastructure it has, i.e. system memory. So you need to emulate the bandwidth and latency of your system memory before it will even begin to harness more than one drive in the array, and you need adequate system bandwidth for the base-level functionality, which a Ryzen will give at 3200 MHz+ RAM. The only way to achieve low-latency, RAM-like performance is to play with the number of SSDs until you find the sweet spot. Like memory, if you want true parallel performance it needs to be identical and balanced. I have 10 x 250 GB Samsung 970 EVO Plus drives and I tried all configs; a five-five ratio was the sweet spot. One more or less than five and I was better off with one, for latency reasons, but with five it operates at the same level as RAM.

My system utilises virtual memory like system memory, and tweaking VM and IO passthrough gave my system insane levels of memory sharing/parsing, to the point that load sharing between CPU and GPU is as native as load sharing between the cores. Obviously I use a Threadripper, and this absolutely negated the latency that TRs suffer from, allowing it not only to go against the trend of the TR’s low gaming capability, but transforming its massive parallelising power, traditionally only good for massive loads like large renders, into massive real-time power, giving it insane gaming performance and stability of the highest level.

Also, how much can be mapped at once by your die is an issue. All the RAID 0 builds I see go all out to 8 drives because that is the maximum number for NVMe RAID, but they do not take into account that it takes 32 PCIe lanes for that (256 bits’ worth of addressing). Not even a dual-CPU system like a Threadripper can do that, because at the very least it needs a few lanes free to operate the chipset and actually stream enough to fill that massive array; otherwise it’s like having 8 swimming pools but only 1 hose to fill them all. Furthermore, I run my OS off one array and my programs/games off the other.

With adequate hardware and an optimal config, the “RAID 0 is unstable” thing is absolute BS. You people talk as if you’ve been pros with computers for years; has no one ever seen an ever-so-slightly unstable dual-channel config, let alone a quad-channel that was anything less than perfect? Both will give exactly the same level of instability. Striping IS parallelling, which IS virtualisation. Why people understand this with RAM but actually think RAID is anything different in concept is a mystery to me. Oh, and before you ask why none of this was relevant before NVMe: latency is the answer. Put simply, even RAID 0 was a joke before NVMe technology; the only machines that benefited from it were servers/workstations, which lacked the real-time power to feed an array of any decent width anyway. So basically, focus on “simultaneous multi-threa… I mean RAID 0, yeah!

@DUDE111 : Welcome to the Win-RAID Forum!
Please post 2 or 3 pictures showing your benchmark results with different RAID0 configurations.

To be able to believe you, I need more information about your system (mainboard/chipset, OS and in-use NVMe driver).
Regards
Dieter (alias Fernando)