Not sure what to tell you. Maybe you are referring to 5700M cards, where each of these voltages might have a separate VRM, but on 3000M desktop cards it is usually as in the photo I posted above.
Core I/O is separated from the PLL, PCI and display voltages (in theory all 4 of these could be separate if the system designer wished), while on mobile systems all of them are joined into a single VDDCI rail.
As this system runs Polaris as a headless card and the display is connected to the CPU's R7 graphics, there is no need to adjust the display voltage.
I compared the tables of several BIOS files. When changing memory clock values in PBE it is also a good idea to update the values in the TMDSAEncoderControl table. In this example there are 3 mem clk profiles: highlighted is 24F4 = 62500 in 10kHz units, then some bytes that define it as a profile (03523F020500), next is 98AB02 = 175000 in 10kHz units. I use 4 mem clk profiles, which is the max the firmware supports.
When changing the size of this table, the value at offset 47 must point to the starting offset of the last 4 bytes of the table (18100804 here).
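To decode those highlighted values: the fields are little-endian and stored in 10 kHz units. A quick throwaway Python helper (the function name is mine, just for illustration):

```python
def clk_10khz_to_mhz(raw: bytes) -> float:
    # Clock fields are little-endian, in 10 kHz units, so MHz = value / 100.
    return int.from_bytes(raw, "little") / 100

print(clk_10khz_to_mhz(bytes.fromhex("24F4")))    # 625.0 MHz  (0xF424 = 62500)
print(clk_10khz_to_mhz(bytes.fromhex("98AB02")))  # 1750.0 MHz (0x02AB98 = 175000)
```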
The next difference I found is in the Sapphire RX460 BIOS, in the ComputeMemoryEnginePLL table. The 3 other BIOSes have a value of C8 at offset 74; only the Sapphire one has 32.
Highlighted this value in the atomdis tool as well. It looks like a register address.
Update: it is a spread spectrum value, also present in the ASIC_InternalSS_Info data table. Might help with stability (probably won't).
Of all the Polaris modders out there, has nobody ever come across the MC microcode?
In desktop card dumps it is located at offset 38000 (I hadn't really looked into this till now as I thought it was GOP related) and is referenced by MCUcodeRomStartAddr in ATOM_MC_INIT_PARAM_TABLE_V2_1; however, its length, which matches the length stored at 3800C, doesn't correspond to the entire data length at 38000.
Laptop BIOSes don't carry this microcode, as it's 60KB in size; it looks like there the microcode has been put into the MemoryTrainingInfo table, which is not present on desktop cards.
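A quick way to see the mismatch on a desktop dump (throwaway script; the filename is made up and reading the value at 3800C as a 32-bit little-endian length is my assumption):

```python
import struct

rom = open("desktop_polaris_dump.rom", "rb").read()   # hypothetical dump filename

UCODE_START = 0x38000
declared_len = struct.unpack_from("<I", rom, 0x3800C)[0]   # assumed 32-bit LE length field

# Estimate the real extent of the data by scanning until a long run of 0xFF padding.
end = UCODE_START
while end < len(rom) and rom[end:end + 64] != b"\xFF" * 64:
    end += 64
apparent_len = end - UCODE_START

print(f"declared length: {declared_len:#x}, apparent data length: {apparent_len:#x}")
```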
Old tools like atomdis, atombiosreader etc. can't dump all of the Polaris tables. I think PBE-xml lists them, but if not they can be found manually by going through the driver sources.
For example, the CMD table GetVoltageInfo (5250) and the DATA table ServiceInfo (6622).
There is no info at all on the ServiceInfo table (checked against 5.6.14 sources; maybe I missed it or need a newer kernel), and it doesn't appear to be loaded, yet some BIOSes have this table populated while others have it empty but still present…
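If anyone wants to hunt these down themselves, here is a rough master-table walker sketch; the layout assumptions (16-bit pointer to the ATOM ROM header at image offset 0x48, master command/data table offsets at +30/+32 in that header, each master list being a 4-byte common header followed by 16-bit entries) come from my reading of the kernel's atombios.h, so cross-check against the headers:

```python
import struct

def list_atom_tables(rom: bytes, master_off: int):
    size = struct.unpack_from("<H", rom, master_off)[0]   # usStructureSize of the master list
    count = (size - 4) // 2                               # 16-bit entries after the common header
    offsets = struct.unpack_from(f"<{count}H", rom, master_off + 4)
    # Indices map to the ATOM_MASTER_LIST_OF_COMMAND/DATA_TABLES structs in atombios.h.
    return [(idx, off) for idx, off in enumerate(offsets) if off]

rom = open("vbios.rom", "rb").read()                      # hypothetical dump filename
rom_header = struct.unpack_from("<H", rom, 0x48)[0]
cmd_off, data_off = struct.unpack_from("<HH", rom, rom_header + 30)

print("command tables:", list_atom_tables(rom, cmd_off))
print("data tables:   ", list_atom_tables(rom, data_off))
```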
And some ugly shit going on here: I removed the TV1OutputControl table from the BIOS but forgot to remove the code in ASICInit that calls this table, yet the GPU still loaded fine without any kind of crash, so how the fuck is this even possible?
Any other modification with a missing/incorrect byte at such a critical point would result in a BIOS crash; here an entire table was missing and it was still fine.
@karmic_koala That's probably where things are differing. Discrete Polaris cards can be designed in multiple ways, whereas laptops I'm certain would be much more limited in implementation to get everything fitting on the PCB and/or within the required TDP. I'm actually looking at shipping in another Polaris GPU just to mess around with; RDNA/Vega cards are boring as fuck, completely locked down so all you can do is use dirty software hacks, but I've not been able to find any candidates I'm really interested in atm. Ideally it would be an RX590, something better than the craptacular XFX models I looked at, but we'll just have to see what pops up.
Remember that a Polaris vBIOS (any Polaris vBIOS, so yes, that goes for the mobile parts too) is a repurposed Tonga vBIOS, so you are likely going to find a LOT of messy code, unused code (probably why some vBIOSes have sections that are just "00" or "FF" where others don't; those manufacturers actually removed some of the useless code), and obsolete code that should be updated. The Frankenstein nature of a Polaris vBIOS could be why Samsung memory has OC issues: someone may have forgotten to address that part of the vBIOS, or Samsung memory support was patched in at a late stage.
I just read about RDNA last night; if I had 130fps in FurMark I'd probably give zero fucks whether it could do 150fps or not…
Worth mentioning that replacing the stupid Hynix 8GB+4GB system RAM that shipped with this laptop with 2x8GB Crucial RAM made some difference in gaming.
First thing I checked: oclmembench went from 20-21GB/s to 27-28GB/s.
Still playing UT4; many of the scenes that were crawling at 25fps or lower at FHD Epic settings showed up to a 10fps improvement.
Regarding Polaris, it looks like I could get a 1400/2100 OC, but the 240W supply is insufficient and it starts crashing. As there doesn't appear to be much difference between 1300 and 1400MHz on the core, I will lower it and use the available power for memory.
I'm an old-fashioned enthusiast; to me it doesn't matter how powerful the hardware is, I want to maximise its potential and put my own stamp on it. Modern hardware doesn't allow you to do that; hell, modern hardware doesn't even allow you to remove a cooler for a proper maintenance clean with those absurd "warranty void if removed" stickers over a screw. Fortunately such things won't stop me as I have the means to remove them without leaving any signs, but for most other people that's just not possible.
With the limited bandwidth you have you probably wouldn't even lose much dropping to 1250MHz on the GPU. Given the power constraints you have, you'd be much better served optimising bandwidth at stock speeds to keep all voltages as low as possible; you really won't be losing anything, because the important thing is the bandwidth efficiency itself. Polaris is bandwidth starved, not frequency starved. Even when I ran an RX590 @ 1.6GHz / 9GHz with highly optimised Micron timings, that netted me absolutely nothing outside of margin of error compared to one of my RX580s @ 1.4GHz / 8.8GHz with highly optimised Hynix timings. Without a memory upgrade to GDDR5X, at that point Polaris simply had no more to give. I'd think I was also bumping up against core performance limits too; there's only so much more you can get out of 2304 shaders and 36 CUs.
Nonetheless, a fully optimised 580/590 will give a Vega 56 a run for its money a lot of the time, and a GTX1070 too. That's not bad at all; even now it still firmly puts Polaris cards in potent 1080p territory.
Well, when the VRM is the limiting factor, raising the voltage helps a little bit. I will do another modification to the memory power and then no voltage raise will be needed.
These Samsung chips should do 2250MHz without problems, and then depending on their quality maybe a hundred or two more.
That should be enough to get 40+ fps stable in FurMark (now it's 38). If it wasn't for the shader cut-down, a full chip would add +14.3% on top of that, about 45fps.
The default you get with this laptop is a miserable 22-23fps, so the hw/sw mods have almost doubled the GPU's performance.
@karmic_koala maybe we can work something out. I have an RX480 Nitro+ on the way, which I ended up opting for because the power requirements pose an interesting challenge for me: what can I get out of that generation of GPU without regularly exceeding TDP limits? Between us, with your experience of IGPs and my experience with discrete solutions, I think we could come up with the world's ultimate Polaris GPU. I don't know many details about the GPU yet, just that it's a Sapphire RX480 8GB, so I'm going to estimate it's a GPU with ~78% ASIC quality and (probably) Samsung -HC25 ICs, which are both the best and worst for discrete GPU solutions: wicked tight timings at stock speed, but they blow harder than Alia Janine above stock frequency lol. You up for the challenge?
Yeah, of course. I'd expect around 80fps with that type of card, but let's first see the hw configuration once it arrives. Are you using liquid metal on GPUs?
@karmic_koala For any full-fledged Polaris GPU I work with I have even higher expectations: the GPU needs to do 92-100FPS in Metro Last Light Redux at maximum settings, 1080p. Being a Nitro+ RX480 I'm not expecting the GPU to do better than 1.45GHz at best; hopefully the card will have a pretty high ASIC (~72%) to run a minimum of 1.4GHz(ish) at around 1.087v, but I'm expecting more along the lines of 1.35GHz. I'll also start with a VDDC of 0.9v (-10% from stock) with VDDCI also at 0.9v. Unless there is something outside of what I'm seeing in software, I noticed yesterday that any RX590 vBIOS I looked at ran VDDCI 0.05v higher in 3D clocks, something earlier iterations like the 480 and 580 don't do.
I don't use liquid metal anymore because once the GPU goes into storage the LM dries out and removing it is a total bitch, so I stick to extremely high performance paste, Halnziye HY-A9, which is rated at 11W/m-K. I've also got some replacement thermal pads rated at 6W/m-K, which is plenty high enough for the memory. I might change the duty of the rear plate though. Sapphire tend to use a thermal pad across the back of the inductors, which is fine, but inductors, even crap ones, are designed to operate at 125°C anyway (good ones 150°C), so I don't know why Sapphire are dumping all that heat into the back plate when it's probably better to use the plate as extra cooling for the memory.
I think any component that can be cooled helps keep overall temps lower, the heat traveling through conduits just as electric current does.
And inductors, when releasing current, create an excessive amount of heat; some claim even more than MOSFETs.
@karmic_koala Ordinarily for other components, yeah, what you say is absolutely right, but inductors by their very nature get so hot that any additional cooling they get from the rear plate won't give them any particularly useful benefit; the rear plate's surface area isn't enough, so it just leads to a runaway thermal overload. That's probably why so many Polaris cards with back plates have the "Caution! Plate becomes HOT!" or similarly worded warnings on them. If the Polaris cards had, say, a 20-25% longer cooling solution, that would have done wonders for preventing the inductors thermally overloading the rear plate. For me, a better way to keep inductors cool has always been simply not to go beyond the critical point, which simply put is when the silicon stops scaling linearly with voltage in a predictable way. Don't break that rule and knocking 30°C off the inductors' load temp is pretty easy to do.
I'm just going to wait and see on that side of things anyway; I can't remember exactly how Sapphire cool the memory with their HSF design. If it's direct contact with the primary heatsink then I'll probably just keep using the backplate for the inductors, but I never was overly fond of a 470/80 or whatever under load literally being capable of giving someone serious burns if they touch the backplate.
Randomly, the GPU will hit a higher voltage than allowed. This happens even with the original, unmodified BIOS.
For example, play a game for a while at 0.95V, then check HWiNFO: it reports a max VDDC of about 1.075V, so something is causing spikes every now and then. Not sure if Polaris has a real voltage sensor or whether all the readings come from the values the driver sets.
When playing video, the gfx offset voltage has no effect. I read about this on overclock.net; they could fix it on the RX580 iirc, but the same method doesn't work on the RX460. Video decode works fine at quite a low voltage, like 0.75V or 0.80V, even though the core clock is 1200MHz.
@karmic_koala I never experimented much with lower GPU frequencies, but I was thinking about doing that a bit more with the incoming Nitro+ 480. Unless you can take a DMM and measure the voltages, I wouldn't worry too much about what the HWiNFO readings momentarily spike to, as the software AFAIK doesn't always take direct sensor readings but makes a "best guess", probably based on a driver-reported value.
If it's not that, I'd think the spike is likely happening due to GDDR5 power requirements, which can be quite high; I can well imagine that when the card switches power states the GDDR momentarily goes to maximum voltage values (including PLL) before settling down to the correct ones for that state.
Outside of those scenarios, a spike could happen if there's a bug in the voltage controller coding, which seems unlikely as I've seen Polaris cards with several different voltage controllers do exactly the same thing, or if there's a problem with the power delivery circuit, which could be the case if the phases aren't being talked to properly, or for a number of other fairly obscure reasons. Lots of theory, but no evidence for any of it. The only certainty is that if those spikes are legitimately happening and they aren't software errata, then the card is doing something it shouldn't at a hardware level. But what? The card asks for power based on programmed settings, and because the PSU knows no better it will do the best it can to facilitate that request until something like OCP is triggered.
It might be worth looking at the VDDC LLC slope; if there is an error there or it is too aggressive then a spike will occur due to overshoot. Could be worth looking at VDDCI as well; on 400 series cards I have seen entries of 2v for some insane reason.
Uploading this for reference for my own experiments. Sapphire RX470 @ 1.32GHz 1.074v, 2050MHz memory and custom timings. Going to have one last play with it before the RX480 arrives to see if I can crack the mystery of Samsung memory being a shitty overclocker. The results are with Basemark GPU for anyone wondering, and yes, pre-22.7.1, AMD's OpenGL driver portion was THAT bad.
You know what? I tried to reproduce this multiple times this evening, and VDDC never exceeded the set voltage + LLC/offset, in either dynamic or static voltage mode.
Can't really understand why it sometimes happens. It was happening before the memory/GPU replacement as well.
LLC could be the suspect; that thing behaves strangely as far as I understand. I always set 0.95V, and there are times when it will pull it down to 0.938V and other times up to 0.962V.
I'll pay attention to this and mention it again if I notice it.
I've read something about OpenGL in the new drivers, but nobody mentioned anything like this.
Are you getting double the scores with the new driver?
I can't use the official driver anymore since the APU graphics moved to legacy status; I managed to install some Amernime version, but there was nothing special about it as far as I remember.
Voltage stability is REALLY difficult to get absolutely perfect; it'll basically never happen, especially on high power devices like a GPU, so something like +/- 12mv is actually really, really good. Switching frequency can also play a role, but the higher that is the more heat is generated, so it's a 50/50 trade-off and the optimal frequency will depend on the components used. And in a word, yup, FPS in OpenGL has literally doubled since AMD fixed their OGL driver portion… though the skeptic in me thinks they might just be using a DX wrapper to intercept the calls and translate them, much like WineD3D, only in reverse.
Alright, I think I'm probably done squeezing what I can out of the RX470 8GB I have; it absolutely refuses to go higher than 2050MHz on the memory without starting to chuck out EDC errors. I did end up forgetting to increase TDC, though, to see if a bit of brute force helped any. That's not to say I didn't have a measure of success: I added a +25mv voltage offset because without it the card has always been held back, and with that offset the GPU frequency comfortably went up to 1.35GHz. Sticking with 1.32GHz, since I'm not fighting droop now, the set voltage can be reduced from 1.074v to 1.024v, which has the actual voltage hovering around 1.038-1.044v. With vcore sorted out I figured I'd try reducing VDDC to 0.85v (default is 1v) and, for the hell of it, increasing VDDCI from 0.85v to 0.9v. Aside from power saving I doubt this did anything, but I did apply a tiny tweak to the memory timings which has allowed me to squeak a consistent, ever so slightly higher score out of the card.
I'd say that's not too shabby considering those results are on an FX6300 system @ 3.5GHz, 16GB DDR3 @ 2133MHz and an ASRock 990FX Extreme4, so the card has to make do with PCIe 2.0 x16, which shouldn't really bottleneck the card, but there might be ever such a tiny bit of bottlenecking going on.
@karmic_koala just been notified that the RX480 is due to arrive today… it's just arrived as I'm typing this. The poor thing is pretty beat up but was cheap, so I'll be doing some TLC work on it this evening to get it back in shape, then we can take it for a spin to see what it can do. My aim is to get it performing around the level of my old ultra-tuned 580s and 590s, possibly even a little better where I've picked up a few more tricks since then. Couple of images for easy reference from the review I wrote on the XFX 590:
The synthetic results for the 580 are from one of my fully tricked-out ones, and the 590 results weren't the absolute best I ended up getting from that card, but pretty close. It'll be interesting to see what the 480 can do.