AMD announced Monday night at Computex 2021 that it has evolved its chiplet architecture into 3D chiplets, specifically what it calls 3D V-Cache technology. By itself, the technology promises performance gains for its Ryzen and Epyc processors that may reach 25 percent.
AMD has made "great progress" on its 3D chiplet technology, and may debut it in its "highest-end products" by the end of 2021, AMD chief executive Dr. Lisa Su said during a Computex keynote address. 3D V-Cache allows AMD to take a Ryzen 5000-series processor and connect a 64MB SRAM cache directly on top of it.
Su showed off a Ryzen 9 5900X, AMD's fastest gaming CPU, and then compared it to a prototype 5900X with 3D V-Cache attached. In Xbox Game Studios' Gears 5, frame rates improved by 12 percent. In other games, also using an identically clocked 5900X, performance increased 4 to 25 percent, for an average of 14 percent, Su said.
AMD's Dr. Lisa Su offered benchmarks showing how the prototype 3D V-Cache can dramatically improve gaming performance.
Stacked chips sound like the future
This may all sound familiar. In 2018, Intel began showing off how its Foveros technology allowed for stacking CPU logic dies on top of one another. That allowed Intel to create the short-lived Lakefield processor, but also the upcoming, higher-performing Alder Lake chip, which Intel displayed at Computex in both desktop and mobile versions.
AMD CEO Dr. Lisa Su explains 3D V-Cache in her Computex presentation.
According to Tirias Research analyst Kevin Krewell, however, AMD's implementation of 3D stacking technology is different. AMD is using through-silicon via technology from TSMC, its foundry partner, similar to the technology used by memory makers to stack DRAM and NAND flash on top of one another. It has better power and bandwidth characteristics than Foveros, but it's unknown how well it can be manufactured.
"AMD is using the tech to get an extra ~12 percent performance boost for its CPUs by adding more L3 cache," Krewell said via instant message. "This technology can also be used in EPYC servers."
Placing a large cache directly adjacent to the CPU can have significant performance advantages. Processors must constantly fetch instructions and data, and storing them in an easily accessible cache, rather than seeking them out in system memory, is a simple way to improve system performance. But baking that cache into the processor die introduces more opportunity for chip defects. In a worst-case scenario, the entire chip could be rendered useless.
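The benefit of keeping data in fast, nearby storage can be illustrated with a toy sketch. The following Python snippet (a hypothetical illustration, not AMD's benchmark; interpreter overhead partially masks hardware cache effects) traverses the same data sequentially and in random order. Both passes do identical work, but the random pattern defeats the locality that CPU caches rely on:

```python
import random
import time

N = 1_000_000
data = list(range(N))

# Sequential pass: memory is touched in order, so each chunk
# of data brought into the CPU's cache serves many consecutive reads.
t0 = time.perf_counter()
seq_sum = sum(data[i] for i in range(N))
seq_time = time.perf_counter() - t0

# Random pass: same elements, same arithmetic, but the scattered
# access pattern makes poor use of the cache.
order = list(range(N))
random.shuffle(order)
t0 = time.perf_counter()
rand_sum = sum(data[i] for i in order)
rand_time = time.perf_counter() - t0

print(f"sequential: {seq_time:.3f}s  random: {rand_time:.3f}s")
assert seq_sum == rand_sum  # identical result, different memory behavior
```

A bigger on-package cache, like 3D V-Cache, widens the set of workloads that behave like the fast sequential case by keeping more of the working set close to the cores.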
Adding the cache as a separate die and then stacking it saves space and cost, while still maintaining the bandwidth advantage and adding gobs of available cache. For example, Su said its prototype bonded SRAM to each AMD CCD, for a total of 192MB of SRAM cache. (Each of the 5900X's two CCDs carries 32MB of L3 cache; stacking 64MB of SRAM on each yields 96MB per CCD, or 192MB in all.) The available level-3 cache on today's 5900X is just 64MB, or a third of the 3D V-Cache prototype's.
AMD’s Su holds up the prototype 3D V-Cache chip.
Following AMD's multichip modules and chiplet approach, Su called 3D chiplets the "next big step forward." Placing the additional silicon on top of the CPU die triples the available cache. TSMC's through-silicon vias (essentially on-chip "wires") allow the CPU and cache to talk to one another at more than 2TBps of bandwidth, Su said. Su also said that AMD's die-to-die approach uses direct copper-to-copper connections, not solder bumps. That's an indirect criticism of Intel's Foveros approach, which uses microbumps and thus consumes more power and delivers less bandwidth, Krewell noted.
For now, that also means that AMD can't easily reproduce Intel's Alder Lake approach. Then again, it may not matter, as AMD can realize performance gains in both its Ryzen and Epyc processors, and possibly its GPUs. Now, the question is: Which of AMD's chips will be blessed with 3D V-Cache?