
Article written by Federico Barutto

 

Another year, another AMD GPU, another efficiency article. As with every recent AMD GPU, the RX 5700 XT ships with its GPU voltage set too high.
Why is every single one like that? Too little time to fine-tune the BIOSes? Bad 7nm yields? Too small a budget? It was, is and will remain a mystery. This behaviour hurts AMD, as Radeon GPUs will keep being tagged as the powerful but inefficient cards, and in a market dominated by “the green ones” that only makes things worse.

[Image: the Sapphire RX 5700 XT Nitro+]

But that’s good news for tweakers like me! As with Vega GPUs (though to a lesser extent), we can easily improve both temperatures and power draw.
Remember that custom RTX 2060s draw around 175W (as reported by GPU-Z). The goal is to get as close to that number as possible while keeping the stock Nitro+’s strong performance, RTX 2070 Super-like (or even 2080-like in DirectX 12 and Vulkan games) and 2070-like in older DirectX 11 games, thereby improving the card’s efficiency.

Unlike Vega chips, the core voltage isn’t affected by the so-called “memory voltage” (which in reality is more like the lowest possible core voltage), so undervolting is easier.
The biggest difference from the previous Vega test is the platform: no longer my watercooled gaming PC, but its air-cooled “little brother”, which I built for a friend. The other big difference is the card: not a reference model with a waterblock, but the best air-cooled 5700 XT on the market, the mighty Sapphire RX 5700 XT Nitro+ with its three fans.


Test rig

CPU: AMD Ryzen 5 2600X
CPU Cooler: Arctic Freezer 34 eSports Duo
Fans: 3 x Arctic F12 PWM + 1 x stock Cooler Master
Mainboard: Asus Prime X370-Pro
BIOS: 5220 (AGESA 1.0.0.3ABBA)
RAM: 2x8GB DDR4-3200 G.Skill Ripjaws
SSD: Crucial MX500 1TB (M.2)
HDD: Toshiba DT01ACA200 2TB
Graphics Card: Sapphire RX 5700 XT Nitro+ 8GB
PSU: Sharkoon SilentStorm IceWind 650W
Case: Cooler Master MasterBox 5
Operating System: Windows 10 Professional 1909
Chipset Driver: AMD 1.11.22.0454
Video Driver: Adrenalin 19.12.2


To eliminate as many variables as possible, all fans (case, CPU cooler and 5700 XT) were set to maximum. The undervolt was done in Wattman (in the 19.12.2 drivers, the latest at the time of testing). The latest Windows 10 1909 build was installed from scratch on the SSD, every driver, application and game was updated to its latest version, the antivirus was disabled, and Windows 10 was set to the “Maximum performance” power plan. Power draw, frequencies and temperatures were captured with GPU-Z. Every test was repeated three times; if a result looked abnormal, that run was discarded and repeated.
I used two settings: Stock and UV+MEM (2000MHz core at 1060mV, 1880MHz GDDR6).
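(A side note for Linux users: the same undervolt can be applied without Wattman through the amdgpu driver’s overdrive interface. The sketch below is a hypothetical illustration based on the documented pp_od_clk_voltage format for Navi 10; the card path, curve point index and memory clock mapping are assumptions that can differ between kernel versions, since Linux reports the GDDR6 command clock at roughly half of Wattman’s figure.)

```python
# Hypothetical sketch: applying this article's undervolt (2000MHz at 1060mV,
# plus the memory overclock) through amdgpu's overdrive interface on Linux.
# Needs root and the overdrive bit enabled in amdgpu.ppfeaturemask; the
# card0 path and the exact values are assumptions, not a tested recipe.

OD_PATH = "/sys/class/drm/card0/device/pp_od_clk_voltage"

def od_write(command: str) -> None:
    """Stage one overdrive command via the pp_od_clk_voltage interface."""
    with open(OD_PATH, "w") as f:
        f.write(command + "\n")

# Curve point 2 is the top of Navi 10's voltage/frequency curve:
# cap it at 2000MHz and 1060mV instead of the ~1.2V stock setting.
od_write("vc 2 2000 1060")

# Memory: Wattman's 1880MHz slider maps to a ~940MHz command clock
# as the Linux driver reports it (an assumption worth double-checking).
od_write("m 1 940")

# Commit the staged values to the card.
od_write("c")
```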

Every game test was run at 2560x1440 via Virtual Super Resolution (VSR: rendering at 2560x1440, about 78% more pixels than native Full HD, then downscaling to 1920x1080) with every setting maxed out except motion blur (disabled by personal preference).
This 5700 XT sample was neither a great undervolter (many of them are stable at 1000-1010mV) nor a great memory overclocker (again, many of them are stable with the slider maxed out in Wattman).
Since there wasn’t enough time and I was constrained by the games my friend wanted on his PC, I chose one synthetic benchmark (Unigine Superposition) and three games (Far Cry 5, Forza Horizon 4 and Tomb Raider 2013). Unfortunately, from a testing point of view, all three games are optimized for Radeon cards (especially Forza Horizon 4); I’d have preferred some more variety…
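The table below condenses the GPU-Z captures. For anyone replicating the methodology, a small script along these lines can average a run’s sensor log; the column names here are assumptions, as they vary between GPU-Z versions, so match them to your own log’s header.

```python
import csv

# Illustrative sketch: averaging one benchmark run from a GPU-Z sensor log
# (a comma-separated .txt). The column names are assumptions and may differ
# between GPU-Z versions; adjust them to the header of your own log.
COLUMNS = ["GPU Clock [MHz]", "Board Power Draw [W]",
           "GPU Temperature [°C]", "Hot Spot [°C]"]

def average_log(path: str) -> dict:
    sums = {c: 0.0 for c in COLUMNS}
    rows = 0
    with open(path, newline="", encoding="utf-8", errors="replace") as f:
        for raw in csv.DictReader(f, skipinitialspace=True):
            row = {k.strip(): v for k, v in raw.items() if k}  # trim padded headers
            try:
                values = [float(row[c]) for c in COLUMNS]
            except (KeyError, TypeError, ValueError):
                continue  # skip malformed lines and non-numeric fields
            for c, v in zip(COLUMNS, values):
                sums[c] += v
            rows += 1
    return {c: round(sums[c] / rows, 1) for c in COLUMNS} if rows else {}

print(average_log("GPU-Z Sensor Log.txt"))
```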


 

Test                   Score (pts)  Min (FPS)  Avg (FPS)  Max (FPS)  Freq (MHz)  Power (W)  Core (°C)  Hotspot (°C)
Superposition stock           5228         31         39         46        1930        215         55           75
Superposition UV+mem          5270         32         40       46.5        1940        170         46           60
%                              +1%        +3%        +3%         0%          0%       -21%       -16%         -20%
FC5 stock                     5300         74         90        109        1950        220         51           66
FC5 UV+mem                    5660         80         96        112        1950        175         45           59
%                              +7%        +8%        +7%        +3%          0%       -20%       -12%         -11%
FH4 stock                        -         73         89        112        1930        190         45           65
FH4 UV+mem                       -         75         91        115        1940        145         43           52
%                                -        +3%        +2%        +3%         +1%       -24%        -4%         -20%
TR 2013 stock                    -         48         69         90        1900        215         53           75
TR 2013 UV+mem                   -         49         70         95        1950        180         48           65
%                                -        +2%        +1%        +6%         +3%       -16%        -9%         -13%
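For clarity, the “%” rows are simply the relative change from Stock to UV+mem, rounded to whole points; here is the arithmetic for the Superposition column as a quick sanity check:

```python
# The "%" rows above are the relative change from Stock to UV+mem.
# Superposition's column as an example:
stock  = {"score": 5228, "min": 31, "avg": 39, "max": 46,
          "freq": 1930, "power": 215, "core": 55, "hotspot": 75}
uv_mem = {"score": 5270, "min": 32, "avg": 40, "max": 46.5,
          "freq": 1940, "power": 170, "core": 46, "hotspot": 60}

for key in stock:
    change = (uv_mem[key] - stock[key]) / stock[key] * 100
    print(f"{key:>8}: {change:+.0f}%")
# Power lands at -21%, core at -16% and hotspot at -20%, reproducing the
# table; the sub-1% FPS and frequency deltas round to within a point of it.
```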

 

Let’s start with Unigine Superposition. It’s the first and only benchmark using the Unigine 2 engine (DX11), famous like its predecessor for not getting along well with the Radeon+Ryzen combo. At stock settings, the exaggerated core voltage is clearly visible in the power draw and temperatures (especially the hotspot). With the UV+OC, despite a near-zero performance increase, power draw and hotspot temperature both take a double-digit dive: -21% and -20% are definitely not insignificant results! Lower but still large is the core temperature decrease (-16%), the largest of all the tests.

The next one is Far Cry 5, which uses the Dunia 2 engine (DX11). Being a Radeon+Ryzen-optimized game, it obviously runs very well, always (well) over 60FPS. It’s simultaneously the game with the worst stock power draw and the one with the best results after the UV+OC: performance increases by 7% while power draw decreases by 20%. Core and hotspot temperatures decrease by a smaller but still noticeable amount (-12% and -11%).

Microsoft’s top dog arcade racing game, Forza Horizon 4, is next. It’s based on the latest ForzaTech engine (DX12), and it’s among the best-running games on Radeon cards. Obviously, it runs fabulously, with FPS always well over 60. As with its predecessor on my Vega 64, it’s the game with the lowest power draw, and it ties with Superposition for the hotspot temperature decrease. -24% power draw (a quarter less!) and -20% hotspot temperature is an astonishing result! Having the lowest power draw, it also has the lowest core temperatures, stock or after the UV+OC, and consequently the smallest core temperature decrease (-4%).

Last and oldest is the first game of the new Tomb Raider reboot trilogy, Tomb Raider 2013, based on the Crystal Engine (DX11). It was the first game to support AMD’s realistic hair rendering technology, TressFX, later updated in the more recent Tomb Raider games (as PureHair) and used in very few titles because it’s very demanding. The weight of this first version shows on minimum FPS, around 48, although average and maximum FPS stay over 60. While it’s the oldest game, it’s the one that taxes the GPU the most: the stock frequency is the lowest (1900MHz) and the power draw reduction the smallest (-16%). The performance increase is small on minimum and average FPS (+1-2%), the most important ones, and bigger on maximum FPS (+6%). Core and hotspot temperatures decrease too, though less than in the other games (-9% and -13%).


The goal has been reached: power draw is around the same as RTX 2060 cards (175W), yet performance increased by 1 to 7%. Hotspot temperatures are also quite a bit lower, and that’s good for the Navi chip’s lifespan.
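Put in performance-per-watt terms (average FPS, or the Superposition score, divided by GPU-Z board power), the table’s numbers translate into a 21-34% efficiency gain:

```python
# Efficiency gain as (perf / power) UV+mem versus Stock, straight from
# the results table: average FPS for the games, score for Superposition.
tests = {
    #                   (perf stock, W stock, perf UV, W UV)
    "Superposition":    (5228, 215, 5270, 170),
    "Far Cry 5":        (90,   220, 96,   175),
    "Forza Horizon 4":  (89,   190, 91,   145),
    "Tomb Raider 2013": (69,   215, 70,   180),
}

for name, (p0, w0, p1, w1) in tests.items():
    gain = (p1 / w1) / (p0 / w0) - 1
    print(f"{name:>16}: {gain:+.0%} perf/W")
# Prints roughly +27%, +34%, +34% and +21% respectively.
```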
As these tests show, AMD must improve its core voltage management so as not to keep hurting its efficiency reputation. Sure, 7nm is new and this is its first generation, but that’s only a partial excuse. Turing GPUs are “enormous” because they include ray tracing cores (not very important at the moment, in my opinion) and are made on a thoroughly tested and optimized process (TSMC’s 12nm, a refined 16nm), yet they’re much more efficient…
We can only hope that, with 7nm+, better yields and RDNA2, AMD will lower its always-too-high stock voltages.