AMD EPYC Milan Review Part 2: Testing 8 to 64 Cores in a Production Platform
by Andrei Frumusanu on June 25, 2021 9:30 AM EST

AMD Platform vs GIGABYTE: IO Power Overhead Gone
Starting off with the big change for today's review: the new production-grade, Milan-compatible GIGABYTE test platform.
In our original review of Milan, we discovered that AMD's newest generation chips had one large glass jaw: the platform's extremely high idle package power, exceeding 100W. This was a notable regression compared to what we saw on Rome, and we deemed it a core cause of why Milan was seeing performance regressions in certain workloads compared to the predecessor Rome SKUs.
We had communicated our findings and worries to AMD prior to publishing the review, but never root-caused the issue, and were never able to confirm whether this was the intended behaviour of the new Milan chips. We theorized that it was a side-effect of the new sIOD, which has the Infinity Fabric running at a higher frequency, as this generation runs it in 1:1 mode with the memory controller clocks.
To our surprise, when setting up the new GIGABYTE system, we found that this extremely high idle power was not exhibited on the new test platform.
Indeed, instead of the 100W idle figures we had measured on the Daytona system, we're now seeing figures pretty much in line with AMD's Rome system, at around 65-72W. The biggest discrepancy was found in the 75F3 part, which now idles 39W lower than on the Daytona system.
Milan Power Efficiency — EPYC 7763 (Milan), 280W TDP setting

| Workload | Daytona Perf | Daytona PKG (W) | Daytona Core (W) | GIGABYTE Perf | GIGABYTE PKG (W) | GIGABYTE Core (W) |
|---|---|---|---|---|---|---|
| 500.perlbench_r | 281 | 274 | 166 | 317 | 282 | 195 |
| 502.gcc_r | 262 | 262 | 131 | 271 | 265 | 150 |
| 505.mcf_r | 155 | 252 | 115 | 158 | 252 | 132 |
| 520.omnetpp_r | 142 | 249 | 120 | 144 | 244 | 133 |
| 523.xalancbmk_r | 181 | 261 | 131 | 195 | 266 | 152 |
| 525.x264_r | 602 | 279 | 172 | 641 | 283 | 196 |
| 531.deepsjeng_r | 262 | 267 | 161 | 296 | 283 | 196 |
| 541.leela_r | 267 | 249 | 148 | 303 | 274 | 199 |
| 548.exchange2_r | 487 | 274 | 176 | 543 | 262 | 202 |
| 557.xz_r | 190 | 260 | 141 | 206 | 272 | 171 |
| **SPECint2017** | 255 | 260 | 141 | 275 | 265 | 164 |
| kJ Total | 2029 | | | 1932 | | |
| Score / W | 0.980 | | | 1.037 | | |
| 503.bwaves_r | 354 | 226 | 90 | 362 | 218 | 99 |
| 507.cactuBSSN_r | 222 | 278 | 150 | 229 | 285 | 174 |
| 508.namd_r | 282 | 279 | 176 | 280 | 260 | 193 |
| 510.parest_r | 153 | 256 | 119 | 162 | 259 | 138 |
| 511.povray_r | 348 | 275 | 176 | 387 | 255 | 193 |
| 519.lbm_r | 39 | 219 | 84 | 40 | 210 | 92 |
| 526.blender_r | 372 | 276 | 165 | 396 | 282 | 188 |
| 527.cam4_r | 399 | 278 | 147 | 417 | 285 | 170 |
| 538.imagick_r | 446 | 278 | 178 | 471 | 268 | 200 |
| 544.nab_r | 259 | 278 | 175 | 275 | 282 | 198 |
| 549.fotonik3d_r | 110 | 220 | 86 | 113 | 215 | 95 |
| 554.roms_r | 88 | 243 | 106 | 89 | 241 | 119 |
| **SPECfp2017** | 211 | 240 | 110 | 220 | 235 | 123 |
| kJ Total | 4980 | | | 4716 | | |
| Score / W | 0.879 | | | 0.936 | | |
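As a sanity check on the table, the Score/W rows are simply the estimated suite score divided by the average package power. A quick sketch, using the rounded figures transcribed from the table above (the published ratios were presumably computed from more precise underlying scores, so the last digit can differ):

```python
# Score/W sanity check: suite score divided by average package power,
# using the rounded figures from the table above.
results = {
    # (suite, platform): (estimated score, avg package power in W)
    ("SPECint2017", "Daytona"):  (255, 260),
    ("SPECint2017", "GIGABYTE"): (275, 265),
    ("SPECfp2017",  "Daytona"):  (211, 240),
    ("SPECfp2017",  "GIGABYTE"): (220, 235),
}

for (suite, platform), (score, pkg_w) in results.items():
    print(f"{suite} on {platform}: {score / pkg_w:.3f} points/W")
```

These round-number inputs reproduce the table's Score/W entries to within a digit in the last decimal place.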
A more detailed power analysis of the EPYC 7763 during our SPEC2017 runs confirms the change in power behaviour. The total average package power hasn't changed much between the systems: the integer suite is now 5W higher at 265W vs 260W, and the FP suite is now 5W lower at 235W vs 240W. What changes more significantly is the core power allocation, which is now much higher on the GIGABYTE system.
In core-bound workloads with little memory pressure, such as 541.leela_r, the core power of the EPYC 7763 went up from 148W to 199W, a +51W or +34% increase. Naturally, because of this core power increase, there's also a corresponding large performance increase of +13.3%.
The behaviour change doesn't apply to every workload: memory-heavy workloads such as 519.lbm_r don't see much of a change in power behaviour, and showcase only a small performance boost.
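The per-workload deltas quoted above fall out directly from the table. A quick sketch using the rounded table entries (the percentages in the text were likely computed from the more precise underlying scores, so they can differ by a few tenths of a point):

```python
# Percent changes between the Daytona and GIGABYTE runs,
# with (Daytona, GIGABYTE) value pairs taken from the table above.
leela_core_w = (148, 199)   # 541.leela_r core power, W
leela_perf   = (267, 303)   # 541.leela_r estimated score
lbm_perf     = (39, 40)     # 519.lbm_r estimated score

def pct(old, new):
    """Percent change from old to new."""
    return 100.0 * (new - old) / old

print(f"leela core power:  +{leela_core_w[1] - leela_core_w[0]}W "
      f"({pct(*leela_core_w):+.1f}%)")
print(f"leela performance: {pct(*leela_perf):+.1f}%")
print(f"lbm performance:   {pct(*lbm_perf):+.1f}%")
```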
Reviewing the performance differences between the figures from the original Daytona system and the new GIGABYTE motherboard test runs, we're seeing significant performance boosts across the board, with many 10-13% increases in compute-bound and core-power-bound workloads.
These figures are significant enough that they change the overall verdict on those SKUs, and they also change the tone of our final review verdict on Milan: evidently, the one weakness the new generation had was not a design mishap, but an issue with the Daytona system. It explains a lot of the more lacklustre performance increases of Milan vs Rome, and we're happy that this ultimately isn't an issue for production-grade platforms.
As a note, because we now also have the 4-chiplet EPYC 7443 and EPYC 7343 SKUs in-house, we also measured the platform idle power of those units, which came in at 50W and 52W. This is quite a bit below the 65-75W of the 8-chiplet 7763, 75F3, and 72F3 parts, which indicates that this power behaviour isn't solely internal to the sIOD chiplet, but is also part of the sIOD-to-CCD interfaces, or possibly the CCD L3 cache power.
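If we assume the idle-power gap between the 4-chiplet and 8-chiplet SKUs is dominated by the four extra CCDs and their fabric links, a back-of-the-envelope split falls out of the figures above. This is purely illustrative arithmetic on our own assumption, not a measured per-chiplet figure:

```python
# Rough per-CCD idle overhead estimate from the idle figures above.
# Assumption (ours, not AMD's): the idle-power gap between 4-CCD and
# 8-CCD SKUs comes from the extra CCDs plus their sIOD links.
idle_8ccd = 70.0   # W, midpoint of the ~65-75W range (7763/75F3/72F3)
idle_4ccd = 51.0   # W, midpoint of the 50-52W measured on 7443/7343

per_ccd = (idle_8ccd - idle_4ccd) / 4   # four extra chiplets
print(f"~{per_ccd:.1f} W idle overhead per CCD + fabric link")
```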
58 Comments
Threska - Sunday, June 27, 2021 - link
Seems the only thing blunted is the economics of throwing more hardware at the problem. Actual technical development has taken off because all the chip-makers have multiple customers across many domains. That's why Anandtech and others are able to have articles like they have.

tygrus - Sunday, June 27, 2021 - link
Reminds me of the inn keeper from Les Miserables. Nice to your face with lots of good promises but then tries to squeeze more money out of the customer at every turn.

tygrus - Sunday, June 27, 2021 - link
I was of course referring to the SW, not the CPU.

130rne - Tuesday, September 14, 2021 - link
What the hell did I just read? Just came across this, I had no idea the enterprise side was this fucked. They are scalping the ungodly dog shit out of their own customers. So you obviously can't duplicate their software in house meaning you're forced to use their software to be competitive, that seems to be the gist. So I buy a stronger cpu, usually a newer model, yeah? And it's more power efficient, and I restrict the software to a certain number of threads on those cpus, they'll just switch the pricing model because I have a better processor. This would incentivize me to buy cheaper processors with less threads, yeah? Buy only what I need.

130rne - Tuesday, September 14, 2021 - link
Continued- basically gimping my own business, do I have that right? Yes? Ok cool, just making sure.eachus - Thursday, July 15, 2021 - link
There is a compelling use case that builders of military systems will be aware of. If you have an in-memory database and need real-time performance, this is your chip. Real-time doesn't mean really fast, it means that the performance of any command will finish within a specified time. So copy the database on initialization into the L3 cache, and assuming the process is handing the data to another computer for further processing, the data will stay in the cache. (Writes, of course, will go to main memory as well, but that's fine. You shouldn't be doing many writes, and again the time will be predictable--just longer.)

I've been retired for over a decade now, so I don't have any knowledge of systems currently being developed.
Who would use a system like this? A good example would be a radar recognition and countermeasures database. The fighter (or other aircraft) needs that data within milliseconds, microseconds is better.
hobbified - Thursday, August 19, 2021 - link
At the time I was involved in that (~2010) it was per-core, with multiple cores on a package counting as "half a CPU" — that is, 1 core = 1CPU license, two 1-core packages = 2CPU license, one 2-core package = 1CPU license, 4 cores total = 2CPU license, etc.

I'm told they do things in a completely different (but no less money-hungry) way these days.
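The counting rule described in the comment above can be sketched as a per-package half-core rounding. This is our reading of the comment's four examples, not the vendor's actual formula; whether rounding happens per package or per system is an assumption:

```python
import math

def licenses(packages):
    """CPU licenses under the per-core scheme described in the comment:
    cores count as half a CPU each, rounded up per package
    (so a single-core package still needs one full license)."""
    return sum(math.ceil(cores / 2) for cores in packages)

print(licenses([1]))     # one 1-core package   -> 1
print(licenses([1, 1]))  # two 1-core packages  -> 2
print(licenses([2]))     # one 2-core package   -> 1
print(licenses([2, 2]))  # four cores total     -> 2
```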
lemurbutton - Friday, June 25, 2021 - link
Can we get some metrics on $/performance as well as power/performance? I think the Altra part would be better value there.

schujj07 - Friday, June 25, 2021 - link
"Database workloads are admittedly still AMD’s weakness here, but in every other scenario, it’s clear which is the better value proposition." I find this conclusion a bit odd. In MultiJVM max-jOPS the 2S 24c 7443 has ~70% the performance of the 2S 40c 8380 (SNC1 best result) despite having 60% the cores of the 8380. In the critical-jOPS the 7443's performance is between the 8380's SNC1 & SNC2 results despite the core disadvantage. To me that means that the DB performance of the Epyc isn't a weakness.

I have personally run the SAP HANA PRD performance test on Epyc 7302's & 7401's. Both CPUs passed the SAP HANA PRD performance test requirements on ESXi 6.7 U3. However, I do not have scores from Intel based hosts for comparison of scores.
schujj07 - Friday, June 25, 2021 - link
The DB conclusion also contradicts what I have read on other sites. https://www.servethehome.com/amd-epyc-7763-review-... Look at the MariaDB numbers for explanation of what is being analyzed. Their 32c Epyc 7543P vs Xeon 6314U is also a nice core count vs core count comparison. https://www.servethehome.com/intel-xeon-gold-6314u... In that the Epyc is ~20%+ faster in MariaDB than the Xeon.