RAID0 performance degradation over time


For some time now I’ve been noticing sequential speed loss on my HDD RAID0 array of four 1TB Seagate drives (ST1000DM003).
When I first built this array and restored my data, I measured performance with CrystalDiskMark and got about 600/600 MB/s (read/write). After a few months it was around 500/500, and now it’s about 380/240 MB/s, roughly half of the initial speed.
Recently I decided to work around the issue, and this is what I found out:

If I just shrink some space from the main partition and create a new one with the same properties (4 KB cluster size), sequential speed increases dramatically on the new partition. So it’s definitely not a hardware issue.
I wonder why this could happen? The volume is fully defragmented, and used space doesn’t exceed 40% of the total array capacity.

Any suggestions are welcome.
And thanks for the attention.

PS Rig info:
OS: Win7x64 SP1
Mobo: ASUS Rampage 4 Extreme (X79)
RAID Drivers: RST 13.1

@ Dartmaul:
Welcome to the Win-RAID Forum!

  1. Benchmark results taken from an empty partition are always better than those taken from the system drive.
  2. The longer an OS has been running, the slower the performance (due to a growing registry and an increasing number of tools/services running in the background).
  3. The best way to shrink the registry and recover the initial performance is a clean re-installation of the OS after a certain amount of time (depending on how the computer is used).
  4. Please run msconfig and check the "Services" and "Startup" programs running in the background.

This is not the best Intel RAID ROM/Driver combination for an Intel RAID0 system.


Thanks for a quick response.

This drive isn’t my OS drive; I’m using an SSD for that. Also, very few apps run from it at startup or access it frequently, so it basically idles most of the time. It’s just storage space for large amounts of data (music, films, etc.).
Also, the amount of data stored on the drive hasn’t changed significantly during these months, but the speed has.
When I first built this array I filled it with my data first, then measured performance, just to see the actual speed, and it was 600/600. That’s why I’m asking.

I’m using the latest BIOS for my mobo, so I can only roll back my drivers to some 12.x version, if that matters.

I also have another SSD with a fresh Win8.1 install, and I performed the same experiment under that OS. Exactly the same behavior.

Thanks for the clarification regarding your system configuration.
Under these circumstances I don’t have any idea regarding the reason for your RAID0 performance drop.
Off-topic question:
What was the reason to put your 4 HDDs into a RAID0 array, with the risk that all your stored data will be lost if any one of these 4 HDDs fails?

It’s a pretty common question to hear; this is my 2nd array built from 4 HDDs.
The 1st reason is that I sometimes need really high sequential write speed to capture games with Fraps at multi-monitor resolution, which comes to about 1 GB per 12 seconds of video.
2nd, overclocking is my hobby and I like my system to be fast by any means. Speed over reliability, basically.
And 3rd, in my experience it’s pretty rare for an HDD to die suddenly without showing any prior signs of problems. Most of the time reallocated sectors show up first, and you have enough time to recover your data. Also, I never use any hardware (including HDDs) beyond its warranty period.

And of course I make backups of any important data.
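For what it’s worth, the Fraps figure above (roughly 1 GB per 12 seconds, approximate numbers) works out to a specific sustained write rate. A quick sanity check:

```python
# Rough throughput math for the Fraps capture workload:
# ~1 GiB of video every 12 seconds (approximate figures from the post).
clip_bytes = 1 * 1024**3      # ~1 GiB per clip
clip_seconds = 12

mb_per_s = clip_bytes / clip_seconds / 1024**2
print(f"sustained write needed: ~{mb_per_s:.0f} MB/s")  # ~85 MB/s
```

A single desktop HDD can usually sustain that on its outer tracks, but the transfer rate drops toward the inner tracks and under concurrent I/O, which is where the RAID0 headroom helps.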

Back to the thread topic:
Today I’m going to look into advanced defrag options (including bitmap and MFT defrag), and also a way to get rid of empty clusters between files.
I guess some of these actions may help. I’ll report back with the results.

That would be fine.
Don’t forget to check the health of your RAIDed HDDs.

It seems I’ve finally solved the problem.

I’m not sure what exactly fixed it, but I only performed a few actions, so there isn’t much to choose from.
I found a small console app called Contig: it can defragment the MFT and also report so-called “free space fragments”, which I understand to be the number of free gaps between files on the disk. After a thorough defrag I managed to cut that number in half (it was about 14k when I started).
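For reference, this is roughly how Contig (from Sysinternals) can be driven. The switches are from memory and the paths are made-up examples, so check `contig /?` for the exact syntax of your version:

```shell
# Analyze fragmentation of everything on D: without changing anything
# (-a = analyze only, -s = recurse into subdirectories).
contig -a -s D:\*

# Defragment one specific large file in place (hypothetical path).
contig D:\video\capture_001.avi

# NTFS metadata files such as the MFT can be targeted by name
# in newer Contig versions; run from an elevated prompt.
contig D:\$Mft
```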

It really seems to me that CrystalDiskMark doesn’t control exactly where on the disk it tests, and its test file can land in small free gaps, which turns the sequential test into something closer to random I/O.
I use this benchmark because it goes through the same I/O path as Windows file operations, so it shows more realistic values.
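That hypothesis, extra free-space fragments turning a “sequential” run into partly random I/O, can be sketched with a toy model. All numbers here are illustrative assumptions, not measurements:

```python
# Toy model: a "sequential" benchmark file allocated across N free-space
# fragments pays roughly one seek per fragment boundary on top of the
# pure streaming transfer time. Numbers are illustrative assumptions.

def effective_mb_s(streaming_mb_s: float, file_mb: float,
                   fragments: int, seek_ms: float = 12.0) -> float:
    """Effective throughput: transfer time plus one seek per boundary."""
    transfer_s = file_mb / streaming_mb_s
    seek_s = (fragments - 1) * seek_ms / 1000.0
    return file_mb / (transfer_s + seek_s)

# 600 MB/s array, 1000 MB test file:
print(effective_mb_s(600, 1000, fragments=1))    # contiguous: full speed
print(effective_mb_s(600, 1000, fragments=50))   # fragmented: noticeably slower
```

Even well under a second of accumulated seek time over a 1000 MB run is enough to shave a large chunk off the headline number, which matches the direction (if not the exact size) of the drop described above.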

By the way, I just recalled a funny story about RAID and data safety:
One day a friend asked me to help him overclock his system. He had his OS on RAID1 (mirror), considered the safest array type.
During the OC his system refused to POST, so I just cleared CMOS and booted with stock BIOS settings. Since I had totally forgotten about his RAID, and RAID mode is disabled by default, I simply booted from a single drive and continued with the OC.
Once I had figured out the optimal overclock settings, I turned RAID back on, which caused data loss, because booting from a single drive had broken the array’s synchronization.

So since that moment, RAID0 = safe, RAID1 = not safe

@ Dartmaul:
Thanks for your feedback.
It is good that you were finally able to find the origin of your benchmark drops and to solve your problem.

Haha, that is a funny, but, as you know, absolutely wrong conclusion from your experiment.

I’d suggest sequential speed could become one of the more challenging benchmarks to compete in. :)