I think it must be the vBIOS. Since no Polaris card ships with better Samsung memory than K4G80325FB-HC25 by default, that was probably the best IC you could test (unless you replace all the modules yourself): 8 GHz, or better said 8 Gbps, and capped at 2050 MHz as you say.
As I destroyed the Hynix memory and had to fit new chips, I went for the fastest available on the market, K4G80325FB-HC22 (9 Gbps), but it is still capped at 2050 MHz.
I will dig into the vBIOS one way or another. I need to, since resizable BAR is not enabled either, and the 896SP part looks like a locked 1024SP one, but not locked in the standard way. On the Russian site they suggest replacing the entire vBIOS to unlock all stream processors; obviously a single-byte change won't do shit.
@karmic_koala Revisions can be made to memory ICs with no published reference to the changes; usually it's a silicon adjustment. But I'd be inclined to side with it being something changed in the vBIOS, though I have no idea what; I've not been able to identify anything significant. Timings are also not the cause, I have exhaustively looked into that. It is some kind of hard limit stopping Samsung ICs from going above around 2050 MHz stable. The ICs I worked with were rated for 8 Gbps, or 8 GHz, whatever you prefer, so HC25s. If I had to make an educated guess at what the issue with Samsung memory is, it would be some deep-rooted register value or something like that, because Samsung, as you know, produce 9 Gbps ICs, yet any Polaris vBIOS you look at that supports Samsung does not have a 2250 MHz strap. Which is odd, very odd, next to the likes of Hynix and Micron, which do have 2250 MHz straps even though they don't need them, the cards only using 8 Gbps ICs.
ReBAR will only work with CPUs that are at least 3000 series desktop or 4000 series mobile, so even forcing it on likely won't do anything if you don't have a CPU from those generations or newer. FYI, GPUs are also laser-cut, so just replacing the firmware won't enable the extra SPs unless the hardware itself is from a batch where the dies weren't laser-cut for some reason.
The reason there is no 2250 MHz Samsung strap on Polaris is simply that no card was ever manufactured with HC22 memory; no reviewer ever came across one, and no user ever uploaded such a vBIOS to TechPowerUp.
If there is, very few people know about it.
Hynix did produce 9 and even 10 Gbps ICs, but they never made it to market for some reason.
@Kuri0 said the CPUs shouldn't pose a limitation when trying to get ReBAR working; it's mainly the motherboards and their BIOSes that need patching.
The GPU was also changed, taken from a GV-RX460WF2OC-2GD card. There were several reports of a successful unlock of that chip (215-0895088), one of them differing in date code by just a few weeks from the chip I soldered.
Worth mentioning that both chips show the same ASIC quality, which appears to be pretty much wrong.
What % would a chip needing 0.95 V to run 1200 MHz have?
There's no card manufactured with 9 Gbps Hynix, Micron, or Elpida either, but there is still a 2250 MHz strap for Hynix and Micron. Not sure about Elpida; if I remember right, Elpida is a division of Micron, but their crap stuff, so Elpida likely just uses the Micron 2250 MHz strap. That still leaves Samsung as the odd one out: all of these manufacturers made 9 Gbps ICs, and no Polaris card used 9 Gbps ICs from any of them, yet Hynix, Micron, and let's say Elpida too, all have a 2250 MHz strap.
@karmic_koala Yeah, I'm able to use the full ReBAR size (8GB) on my i5 3470, along with many others also using it, and it actually gives a ~10% FPS boost (in certain DX12 games) once enabled in the AMD driver with the registry edit needed for it to function on Polaris GPUs.
@karmic_koala Wait, do you have an Intel or AMD CPU? Are we talking SAM (AMD) or ReBAR (Intel)? Both are the same thing and everyone just calls it ReBAR now, but because AMD added confusion by calling it SAM, I always forget to ask whether the system is Intel or AMD. AFAIK the functionality does nothing on AMD systems if you don't have at least a 3000 series (desktop) or 4000 series (mobile) CPU. On Intel CPUs I think ReBAR will work even all the way back to the 2600K, but you might need to hack it to work, because Intel.
I'm using an Intel i5 3470 with an RX 580 (KMD_EnableReBarForLegacyASIC enabled), and Radeon Software/GPU-Z show ReBAR working fine.
Older Ryzen CPUs (Zen 1/Zen+) can also use SAM if the motherboard allows it, which Gigabyte's don't. But recently users have figured out that Gigabyte AM4 motherboards can be modded the same way as older motherboards (ReBarUEFI) to get SAM working on them with Zen/Zen+ CPUs.
The oldest Intel CPU tested is a 2600K or similar, yeah. UEFI motherboards for older sockets exist but are rare, so no one has tested them yet.
All Ryzen CPUs support SAM/ReBAR in hardware; I don't know about the older AMD FX / A-Series, though, because no one has tested them.
The only requirement for fully functioning ReBAR is a UEFI BIOS with Above 4G Decoding in it. So yeah, if the board has an Above 4G Decoding option, even a hidden one, and enabling it doesn't cause trouble, then you should be able to use ReBAR with the ReBarUEFI mod.
Pretty sure, once again, all the other cards had weak VRMs.
Remember I once wrote about seeing someone's HWiNFO showing 100 W on VDDCI and how ridiculous it seemed? That must have been an older generation of cards where they wired VDDCI to the memory.
"by AMD's estimate 15-20% of Radeon R9 290X's (250W TDP) power consumption is for memory"
This was just an estimate, probably referring to factory clocks. An overclocker would throw double that power at the memory.
The top Google search result says 2.5 W per module. Fuck that shit. For GDDR5 in high-power mode it is more likely 10 W per module.
With a 10 A VRM my system could hardly do 1900 MHz; now with a 15 A VRM it maxes out at 2080 MHz.
So don't look any further unless you are able to rule out the VRM.
@Kuri0 I'm well versed with Gigabyte and their shitty boards; for a long time I released modified X370 firmwares for that chipset. While Gigabyte hide it because they suck (or did hide it, I've not looked at many of their X570 firmwares), the boards do actually have a 4G Decoding option. My backup system is an ASRock 990FX with an FX-6300 and an 8GB RX 470, so in theory I could test ReBAR on an FX CPU, but the ASRock firmware for the board I have, a 990FX Extreme4, is pretty damn flaky, so it's not the best candidate.
@karmic_koala I can rule out the VRM. I only tested cards with high-quality VRMs specifically to rule out problems like that, and it's why I warned everyone away from the crappy PowerColor and Gigabyte RX 480s. Hell, just avoid any Polaris Gigabyte card; they pretty much all suck for one reason or another.
The VRM may be high quality but still have its TDC limited to levels insufficient for overclocking. We'd need to measure consumption somehow to be 100% sure.
I remember reading before that Polaris 8GB cards draw around 45 W on memory (a real review where they measured it, not just an estimate); 4GB cards will be approximately half of that, 20-25 W.
So 15 A multiplied by the voltage set, 1.62 V in my case, equals about 24 W, which looks like just enough for the default clock of 2000 MHz, maybe a little more, and that is exactly what I get.
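As a sanity check on that arithmetic, here's a throwaway Python sketch (my own, not from any tool) computing the rail's power ceiling from the VRM current limit and the set voltage; the 10 A / 15 A limits and 1.62 V are the figures from my card, substitute your own:

```python
# Memory-rail power ceiling: VRM current limit times the set rail voltage.
def mem_rail_ceiling_w(current_limit_a: float, rail_voltage_v: float) -> float:
    """Maximum power the memory VRM can deliver before hitting its current limit."""
    return current_limit_a * rail_voltage_v

print(f"10 A VRM: {mem_rail_ceiling_w(10, 1.62):.1f} W")  # ~16 W: struggled past 1900 MHz
print(f"15 A VRM: {mem_rail_ceiling_w(15, 1.62):.1f} W")  # ~24 W: enough for ~2000-2080 MHz
```

Against the ~45 W an 8GB card draws on memory (about half that for 4GB), ~24 W lines up with "just enough for default clocks" on a 4GB card.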
The VRM is in PFM mode; not sure if it's worth forcing it into PWM mode.
@karmic_koala The cards I worked with all had good to excellent TDC, so that shouldn't have been a problem. I quickly found that the memory controller voltage, for discrete Polaris cards at least, does impact the maximum stable memory OC most of the time. On the 580s and 590s I tested, I could reduce the memory controller voltage to 0.9 V, even 0.85 V; at the very worst it just reduced power consumption a bit, and at best lowering the voltage pushed me to 9 Gbps. Damn, I wish I hadn't sold that particular RX 580; it was an amazing little card by the time I fully tuned it. It used at most about 200 W at 4K running Metro Last Light at max settings and under 150 W at 1080p, even OC'd to 1.4 GHz / 9 GHz with awesome memory timings. I might see if I can pick up another Polaris card for cheap sometime to play around with again.
If you rely on the TDC value in the vBIOS and driver utilities, that doesn't matter, because memory power is provided by a separate VRM which usually isn't software controllable.
And the loose terminology around "memory controller" (common in GPU tweaking posts on the web) only adds to the confusion.
However, when you say 0.9 and 0.85 V, I can see you mean VDDCI. Still, that's only the PCIe, PLL, and PHY voltage, not directly related to the memory controller, which shares power with the memory chips and will have a 1.35 V nominal.
I went through this while replacing the GPU, and it is the only logical conclusion I could make. See photo.
Now, regarding VDDCI and the "myth", as you call it in your first post, I've also seen that.
Running a static VDDCI undervolted too far (0.75 V) did cause system crashes when a big load step occurred.
Running dynamic VDDCI with 0.7 V at 300 MHz and 1000 MHz memory clock, 0.75 V at 1500 MHz, and 0.80 V at 1800 MHz (these were the PowerPlay values), the driver still does the scaling automatically.
At 1350 MHz core / 2050 MHz memory it scaled to 0.89 V and remained there when I upped the memory to 2070 or 2080 MHz. At 2090 MHz it always crashed.
Then I tried a static VDDCI of 1 V; now the benchmarks started with memory at 2090 or 2100 MHz, but at 2110 MHz it crashed again.
So needing around +100 mV on this rail just to run the memory 20 MHz faster is clear enough evidence that this isn't the issue. And as you say, it's best to keep VDDCI at defaults, or undervolt it slightly if possible.
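To put a number on that evidence, a quick back-of-the-envelope in Python using the figures from my runs above (0.89 V scaled vs. 1.0 V static):

```python
# Max stable memory clock observed at each effective VDDCI (mV -> MHz).
runs_mv_to_mhz = {890: 2080, 1000: 2100}

(mv_lo, mhz_lo), (mv_hi, mhz_hi) = sorted(runs_mv_to_mhz.items())
mhz_per_volt = (mhz_hi - mhz_lo) / (mv_hi - mv_lo) * 1000
print(f"~{mhz_per_volt:.0f} MHz of headroom per extra volt of VDDCI")
```

A slope of well under 200 MHz per volt means you would need absurd voltage to reach the next strap; whatever is holding the memory back, it isn't this rail.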
Yeah, I was talking about VDDCI; dunno why I was referring to it as the memory controller voltage. Brainfart moment. For reference, the actual memory controller itself should not exceed +10% of stock voltage or you will fry it, especially on discrete Polaris cards. I've seen a lot of cases of that happening, and inquiring through some back channels I had it confirmed by an AMD engineer; it's something to do with how that voltage is also linked to the voltage for the rear I/O, if I remember correctly.
Actually, it is 1.5 V ±3% according to the databook. Stock voltage can be card dependent, e.g. 1.35 V, 1.4 V, etc.
The max safe value should be 1.6 V. I've had it at 1.65 V before and there were no problems.
The higher the voltage, the more heat from the VRM and the greater the risk of crashes due to instability.
The memory voltage I'm talking about is Vdisp, aka display voltage, the voltage provided to the display ports via the memory controller. From what you say, I'm pretty sure you are talking about the internal PLL voltage, which is different. Vdisp can sometimes help fix black screens when tightening memory timings, and it is the voltage which should not exceed +10% of stock. However, I found the stock voltage to be somewhat of an overvolt on 400 and 500 series discrete Polaris cards; it can be reduced by 0.05 V minimum, and on 500 series cards I've run it a further 0.1 V lower without any issues, same OC and memory timings. I'm pretty sure the guy on Actually Hardcore Overclocking raised the PLL voltage to something like 1.6 V and killed his Polaris card, murdered the circuit, so yeah, I would proceed very carefully there. Of particular interest, though: apparently Raja was quoted as saying Polaris is compatible with GDDR5X, so 10 Gbps memory in theory isn't out of the question, and if such a card ever existed it likely would have been faster than Vega 56/64 most of the time. Hell, I got some 580s and 590s to be faster than some Vega cards, so a Polaris card faster than Vega would have been very cool.
Not sure what to tell you. Maybe you are referring to 5700M cards; there it might be that each of these voltages has a separate VRM, but on desktop cards it is usually as in the photo I posted above.
Core I/O is separated from the PLL, PCI, and display voltages (in theory all four of these could be separated if the system designer wished), while on mobile systems all of them are joined into a single VDDCI rail.
As this system runs the Polaris card headless, with the display connected to the CPU's R7 graphics, there is no need to adjust the display voltage.
I did compare the tables of several BIOS files. When changing memory clock values in PBE it is also a good idea to update the values in the TMDSAEncoderControl table. In this example there are 3 memory clock profiles; highlighted is 24F4 = 62500 in 10 kHz units, then some bytes that define it as a profile (03523F020500), and next is 98AB02 = 175000 in 10 kHz units. I use 4 memory clock profiles, which is the maximum the firmware supports.
When changing the size of this table, the value at offset 47 must indicate the starting offset of the last 4 bytes of the table: 18100804.
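For anyone following along in a hex editor: the clock fields in these tables are plain little-endian integers in units of 10 kHz, so the two values from my example decode like this (quick Python check of my own, not part of any tool):

```python
# Decode a little-endian clock field (units of 10 kHz), hex given as it
# appears byte-for-byte in the dump.
def decode_clock_10khz(hex_as_dumped: str) -> int:
    return int.from_bytes(bytes.fromhex(hex_as_dumped), "little")

print(decode_clock_10khz("24F4"))    # 62500 -> 625.00 MHz
print(decode_clock_10khz("98AB02"))  # 175000 -> 1750.00 MHz
```

That's why the bytes look "backwards" next to the numbers quoted above: 24 F4 in the file is the value 0xF424.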
The next difference I found is in the Sapphire RX 460 BIOS's ComputeMemoryEnginePLL table. The 3 other BIOSes have a C8 value at offset 74; only the Sapphire one has 32.
I highlighted this value in the atomdis tool as well. It looks like a register address.
Update: it is a spread spectrum value, also present in the ASIC_InternalSS_Info data table. Might help for better stability (probably won't).
Of all the Polaris modders out there, has nobody ever come across the MC microcode?
It is located at offset 38000 in desktop card dumps (I didn't really look into this until now, as I thought it was GOP related), referenced by MCUcodeRomStartAddr in ATOM_MC_INIT_PARAM_TABLE_V2_1. However, its length, which matches the length field at 3800C, doesn't correspond to the entire data length at 38000.
Laptop BIOSes don't have this microcode there; as it's 60 KB in size, it looks like the microcode has been put into the MemoryTrainingInfo table instead, which is not present on desktop cards.
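If anyone wants to reproduce the length mismatch, here's a rough Python sketch of how I'd probe it. The `probe_mc_ucode` helper is my own; treating the field at offset +0xC as a 32-bit little-endian length, and treating a 16-byte run of 0xFF as end-of-blob padding, are both assumptions, so real dumps may differ:

```python
def probe_mc_ucode(rom: bytes, start: int = 0x38000) -> tuple[int, int]:
    """Return (declared_length, measured_length) for the MC microcode blob.

    Assumes a 32-bit little-endian length field at start + 0xC (offset 3800C
    in the dump) and that the blob ends at the first 16-byte run of 0xFF
    padding. Both are guesses.
    """
    declared = int.from_bytes(rom[start + 0xC:start + 0x10], "little")
    end = start
    while end < len(rom) and rom[end:end + 16] != b"\xff" * 16:
        end += 1
    return declared, end - start
```

If the two numbers disagree on your dump, you are seeing the same mismatch I described above.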
Old tools like atomdis, atombiosreader, etc. can't dump all the Polaris tables. I think PBE-xml lists them, but if not they can be found manually by going through the driver sources.
CMD table GetVoltageInfo (5250) and DATA table ServiceInfo (6622).
There is no info on the ServiceInfo table (I checked against the 5.6.14 sources; maybe I missed it or need a newer kernel), and it doesn't appear to be loaded. Still, some BIOSes have this table populated, while others have it empty but still present…
And some ugly shit going on here: I removed the TV1OutputControl table from the BIOS but forgot to remove the code in ASICInit that calls it. The GPU still loaded fine without any kind of crash, so how the fuck is this even possible?
Any other modification with a missing or incorrect byte at such a critical point would result in a BIOS crash; here an entire table was missing and it was still fine.