I’m on Windows 8 x64 using Intel RST driver version 12.8.0.1016, originally installed together with the interface app and event monitor service.
4 identical platter drives in RAID 5
The symptom:
I experience huge latency (roughly one second or more) when accessing the RAID after it has been idle for about 30 seconds. If I continuously read from or write to the drive, it works normally. If I wait for a minute and then attempt to read a file, I experience the huge latency every time.
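For anyone who wants to reproduce this kind of stall, it can be timed with a quick script along these lines (a sketch only; the idle durations are placeholders, and it uses a scratch file so it runs anywhere — point it at a file on the RAID volume for a real measurement):

```python
import os
import tempfile
import time

def first_access_latency(path, idle_seconds):
    """Let the array sit idle, then time a single small read."""
    time.sleep(idle_seconds)
    start = time.perf_counter()
    with open(path, "rb") as f:
        f.read(4096)
    return time.perf_counter() - start

# Scratch file so the sketch runs anywhere; use a file on the RAID
# volume instead for a real measurement.
fd, path = tempfile.mkstemp()
os.write(fd, b"\0" * 4096)
os.close(fd)

for idle in (0, 1):  # use 60+ seconds to reproduce the idle stall
    print(f"after {idle}s idle: {first_access_latency(path, idle):.3f} s")

os.remove(path)
```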
I found this via Google:
http://hardforum.com/showthread.php?t=1659226
But that doesn’t help me: the stand-alone F6 drivers are identical to the ones put in place by the SetupRST installer, so when I attempted to update the driver as advised in the link, Windows just told me it was already installed.
I’m not sure I know how to safely remove/re-install the driver.
What should I do to fix this issue?
I forgot to mention that I also tried disabling Link Power Management with no results.
@ boogagiga:
Welcome to the Win-RAID Forum!
The advice given at HardForum was to
- uninstall the Intel RST software (RST Console and RST Services) from within the Control Panel > "Programs" and
- to manually install the latest Intel RST driver.
The latest 32/64bit Intel RST(e) drivers are v12.8.6.1000 and can be found >here<.
If you get the message that the "best/newest" driver is already installed, please use the "Have Disk" button during the driver update procedure.
Regards
Fernando
I’m sorry. I didn’t clarify in the original post what steps I went through.
I downloaded the F6 zip from Intel’s site instead of the SetupRST installer.
I uninstalled the SetupRST service and interface.
I then went to Computer Management and tried to update the driver using the F6 drivers I had just downloaded, but it said there was no reason to update the driver.
I did not try the Have Disk method; I will try that now with the same drivers, just to see whether it circumvents the “already installed” issue.
Afterwards, I will just use the newest driver you provided, as it is newer and should have no update issues regardless.
Thank you for the quick response, I will let you know how it goes.
It is possible that the Intel RST driver v12.8.0.1016 was still running even though you had uninstalled the RST software. In that case the message is OK, because you obviously tried to install exactly the same driver version the OS was already using.
Ok, so I did the following:
1) I used the Have Disk button to install the same drivers (v 12.8.0.1016) without it telling me that they were already installed.
2) Restarted the machine
3) Tested the latency issue. No change
4) I then updated to v12.8.6.1000 via the previously given link.
5) Restarted the machine
6) Tested the latency issue. No change
Is there anything else I can try?
It’s more than just an inconvenience. For example, while I am playing Mario Kart Wii on the Dolphin emulator, the game freezes at certain points in the map. Whenever the first player starts the final lap, the game has to load the sound/animation/etc., and it freezes for literally over a second before continuing.
Since this totally kills the game, I currently have a little program that runs in the background that accesses the drive every X milliseconds, so that I don’t run into this issue, but I don’t see that as a good solution to my problem.
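The workaround program can be as simple as a loop that forces one real disk access more often than the idle threshold. A minimal sketch of that idea (the path, interval, and function name are my own illustration, not from the thread):

```python
import os
import time

def keep_array_awake(path, interval_s, iterations=None):
    """Rewrite one byte of `path` every `interval_s` seconds so the
    array never reaches its ~30 s idle threshold."""
    n = 0
    while iterations is None or n < iterations:
        with open(path, "r+b") as f:
            f.seek(0)
            f.write(b"\0")
            f.flush()
            os.fsync(f.fileno())  # force the write past the OS cache to the disk
        time.sleep(interval_s)
        n += 1

# Example (hypothetical path on the RAID volume; the file must already exist):
# keep_array_awake(r"D:\keepalive.bin", interval_s=10)
```

The `os.fsync` call matters: a plain read or unflushed write may be served from the OS cache and never touch the array at all.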
Have you ever tried to run the Intel SATA Controller of your X79 chipset board in RSTe mode using the original Intel RSTe drivers of the v3.x.x.xxxx series?
After updating my BIOS to fix the fatal errors (crashes/BSODs) I was originally getting with the RAID under RSTe, the BIOS gave me the option to choose between RSTe and RST, but no matter which I chose, the RST Option ROM would always show up during boot. So I set it to RST.
If I try to change it to RSTe, will I need to back up the information on the RAID volume beforehand?
Questions:
1. When you updated the BIOS, did you use the original or a modded BIOS file?
2. Which Intel RAID Utility version did you use, when you created the RAID0 array?
Yes, you should do a complete data backup of your RAID volume.
If you want to switch the Intel RAID ROM mode from RST to RSTe, you probably have to do the following:
- Break the currently working Intel RST RAID array.
- Optional: Do a "Secure Erase" of the formerly RAID0 array members.
- Switch the in-use Intel RAID ROM from RST to RSTe.
- Create a new RAID0 array by using the Intel RSTe v3.x.x.xxxx Utility interface.
- If your currently used OS is on the RAID0 array, do a clean OS installation onto the freshly created RAID0 array.
1. Original BIOS from the ASUS website
2. Version 12.7.0.1936
Just a note, this is a RAID 5 not a RAID 0, in case that changes anything.
I will transfer the files, follow your steps, and let you know how it goes.
Thank you
I was unable to switch the BIOS over to RSTe; I talked to ASUS and couldn’t resolve it. So I’m back to RST v12.8.6.1000 with a RAID 5, and I will just have the script that periodically accesses the drive run at startup.
I have a few questions though.
Isn’t a 4-drive RAID 10 array supposed to be able to read at roughly 4-drive RAID 0 speeds? Also, people online claimed that RAID 10 is superior for writes as well compared to RAID 5, because it doesn’t have to calculate parity.
I did tests with both RAID 5 and RAID 10. The write speeds of RAID 5 were roughly 50% faster than RAID 10, and the read speeds were roughly the same…
Both tests were done with write back cache, and with huge files. Do these numbers make sense? Does Intel’s RAID 10 not utilize all the drives on reads?
Either way, as for this issue, I’m just gonna deal with it for now. Thank you for all the help.
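For what it’s worth, the textbook sequential-throughput expectations behind that question come down to simple arithmetic, sketched below (the per-disk speed is an assumed illustrative figure; real results depend on stripe size, caching, and the controller):

```python
# Back-of-the-envelope sequential throughput for n identical disks,
# each streaming at x MB/s. Illustrative model only, not measured data.

def raid5(n, x):
    # Reads stripe across all n disks; large full-stripe writes get
    # n-1 disks' worth of bandwidth (one disk's worth goes to parity).
    return {"read": n * x, "write": (n - 1) * x}

def raid10(n, x):
    # Writes hit n/2 mirrored pairs. Reads can in principle use all n
    # disks, but an implementation that reads only one side of each
    # mirror gets just n/2 disks' worth.
    return {"read_ideal": n * x, "read_one_side": n // 2 * x,
            "write": n // 2 * x}

x = 150  # assumed per-disk MB/s
print(raid5(4, x))   # {'read': 600, 'write': 450}
print(raid10(4, x))  # {'read_ideal': 600, 'read_one_side': 300, 'write': 300}
```

On this model, 4-drive RAID 5 writes come out at 1.5× RAID 10 writes (consistent with the ~50% gap measured above), and the read figures only come out equal if the RAID 10 implementation reads a single side of each mirror rather than all four disks.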
That is not easy to understand, because all Intel X79 chipset mainboards natively have an X79 SATA RAID Controller (DEV_2826) and not an Intel Desktop/Workstation/Server Express Chipset SATA RAID Controller (DEV_2822). Has ASUS switched the RAID Controller DeviceID of the Rampage IV Extreme without giving the users the option to revert to an original Intel RSTe RAID OROM v3.x.x.xxxx?
You can find a comparison between RAID10 and RAID5 within >this< article.
I went from BIOS 1101 to 4503. In 1101 there was no option, just RSTe. In 4503 there is the option of RST or RSTe, but (at least for me) I can’t get the RSTe Option ROM to show up when choosing RSTe in the BIOS, which means I can’t even attempt to install the RSTe driver, as the installer checks for compatible hardware before installing. The installer told me there is no RSTe-compatible hardware. The RST software, however, installs with no issues. While I was on BIOS 1101, it was the exact opposite: the RSTe Option ROM showed up during POST, the RST driver installer would not let me install, and the RSTe driver installer worked.
Also, I was hoping for your personal insight on speed specifically with Intel chipsets. I have already read that exact text (from a different website), and it contradicts my findings.
For example,
“The RAID 5 performance in the read operations is quite appreciated, though its write operation is quite slow, as compared to RAID 10. RAID 10 is thus used for systems which require high write performance. Hence, it is very obvious, RAID 10 is used for systems like heavy databases, which require high speed write performance.” (corrected)
My RAID 5 gets 50% better write speed than RAID 10, which I don’t have an issue believing (because with my computer setup, the parity calculations should be negligible), but the read speeds are the same between my RAID 5 and RAID 10. In theory the RAID 10 should be noticeably faster, since it can technically read from the four disks as if it were a RAID 0.
So, I was asking if, in your experience specifically with Intel RAIDs, do these findings make sense?
Since I don’t have any first-hand experience with RAID5 and RAID10 arrays, I cannot answer your question.
I finally figured out what was introducing the latency. The drives themselves.
I guess I didn’t mention it in the original post, but I was using four 4TB WD Red drives. Until a few days ago, I didn’t realize that the Red drives use the same IntelliPower power-saving tech as the Green drives. Thinking that this could be the problem (since the drives might slow their spindles to conserve energy), I ordered four 4TB WD Black drives (yes, the newer generation) and just tested them. No latency issues at all.
Not sure whether the latency was caused by one or more of the Red drives being defective, by weird firmware issues, or by their IntelliPower tech. But changing the drives definitely fixed the issue.
Thanks again for all of your help.
Thanks for your feedback.
It is good that you found out the origin of the issue and how to solve it.
To make this clear for visitors of the Forum, I have marked this thread topic as being solved.
Merry Christmas!