AMD appears to have the upper hand over Nvidia when it comes to memory latency with current-gen graphics cards.

This conclusion was drawn by tech site Chips and Cheese, which ran comparisons between Big Navi and Ampere, using pointer-chasing benchmarks (written in OpenCL) to measure memory latency in nanoseconds (ns), with AMD's RX 6800 XT pitted against Nvidia's RTX 3090.
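The pointer-chasing approach itself is simple: the benchmark walks a randomised chain of array indices where each load depends on the result of the previous one, so the GPU can't overlap or prefetch the accesses, and the average time per step approximates raw latency. Chips and Cheese's tests were written in OpenCL so they run on both vendors' cards; purely as an illustrative sketch of the technique (not the site's actual code), the minimal CUDA version below times a single-threaded chase through a buffer sized to spill past both Ampere's L2 and Big Navi's Infinity Cache. The buffer size, step count and kernel structure are all assumptions made for this example.

```cuda
#include <cstdio>
#include <numeric>
#include <random>
#include <utility>
#include <vector>
#include <cuda_runtime.h>

// Illustrative pointer-chasing kernel: each load's address depends on the
// previous load's result, so the time per step is roughly one memory round trip.
__global__ void chase(const unsigned int *chain, unsigned int start,
                      int steps, unsigned int *sink)
{
    unsigned int idx = start;
    for (int i = 0; i < steps; ++i)
        idx = chain[idx];   // dependent load
    *sink = idx;            // keep the result live so the loop isn't optimised away
}

int main()
{
    // 256 MiB working set (an assumption): larger than Ampere's L2 and Big Navi's
    // 128MB Infinity Cache, so the chase should end up measuring VRAM latency.
    const int n = 256 * 1024 * 1024 / sizeof(unsigned int);
    const int steps = 1 << 20;

    // Build a single random cycle (Sattolo's algorithm) so prefetchers can't
    // guess the next address and no element is revisited during the walk.
    std::vector<unsigned int> chain(n);
    std::iota(chain.begin(), chain.end(), 0u);
    std::mt19937 rng(42);
    for (unsigned int i = n - 1; i > 0; --i) {
        std::uniform_int_distribution<unsigned int> pick(0, i - 1);
        std::swap(chain[i], chain[pick(rng)]);
    }

    unsigned int *d_chain = nullptr, *d_sink = nullptr;
    cudaMalloc(&d_chain, (size_t)n * sizeof(unsigned int));
    cudaMalloc(&d_sink, sizeof(unsigned int));
    cudaMemcpy(d_chain, chain.data(), (size_t)n * sizeof(unsigned int),
               cudaMemcpyHostToDevice);

    cudaEvent_t t0, t1;
    cudaEventCreate(&t0);
    cudaEventCreate(&t1);

    chase<<<1, 1>>>(d_chain, 0, steps, d_sink);  // warm-up launch
    cudaEventRecord(t0);
    chase<<<1, 1>>>(d_chain, 0, steps, d_sink);  // single thread: pure latency
    cudaEventRecord(t1);
    cudaEventSynchronize(t1);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, t0, t1);
    printf("~%.1f ns per dependent load\n", ms * 1e6 / steps);

    cudaEventDestroy(t0);
    cudaEventDestroy(t1);
    cudaFree(d_chain);
    cudaFree(d_sink);
    return 0;
}
```

Shrinking the working set so it fits inside a given cache level would instead reveal that level's latency, which is how per-level figures like the ones below are typically teased out; an OpenCL or HIP port of the same kernel would be needed to run the equivalent measurement on the Radeon card.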

And the results were interesting to say the least, when you consider that the Big Navi card has more cache levels to travel through on the journey to memory, yet it manages to match the Ampere GPU.

Nvidia runs with a straightforward scheme of two levels of cache, L1 and L2: small amounts of very fast on-chip storage (much quicker than the actual video RAM) sitting right there on the GPU, just as is the case with CPUs.

With Big Navi, however, AMD does things very differently, using multiple layers of cache: L0, L1, L2 and Infinity Cache (which is effectively L3). That means there are more levels to pass through, as we mentioned, so you'd think this could slow things down in terms of latency, but that's not the case.

In fact, AMD's cache is impressively quick indeed, with latency staying low through all those multiple layers, whereas Nvidia has a relatively high L2 latency of around 100ns, compared to around 66ns for AMD to get from L0 out to L2. The upshot of all this is that the overall result is pretty much a dead heat.

(Image credit: Chips and Cheese)

As Chips and Cheese observes: “Amazingly, RDNA 2's [Big Navi] VRAM latency is about the same as Ampere's, even though RDNA 2 is checking two more levels of cache on the way to memory.”

The site points out that AMD's speedy, low-latency L2 and L3 caches could give Big Navi GPUs the edge in less demanding workloads, and that this might explain why RX 6000 graphics cards do very well at lower resolutions (where the GPU isn't being pushed nearly as hard).

Certainly, that's a tentative conclusion, and we need to be careful about reading too much into any single test. Furthermore, it's unclear how any such memory latency advantage might translate into real-world performance in games anyway.

Chips and Cheese also notes that memory latency with CPUs is far lower, of course, but that by comparison “GDDR6 latency itself isn't so bad” on an overall level.

Via Tom's Hardware