Nvidia RTX 3080 Gaming Performance at 1440p: CPU or Architecture Bottleneck?

It’s time to explore something we’ve been eager to investigate since our day-one GeForce RTX 3080 review: the weaker-than-expected resolution scaling of the new Ampere architecture.

Now before we get into it, do note that this article is not designed to change your opinion (or ours) about Ampere. The RTX 3080 is the best value high-end GPU on the market by a country mile… if you can get your hands on one. But back to Ampere and its interesting quirks: the aim here is to investigate and explain what’s going on.

For those of you not up to speed, in our RTX 3080 review we found that gaming performance at 1440p was not as impressive as what we saw at 4K. We observed this, as did many other reviewers, some of whom simply attributed it to a CPU bottleneck: the RTX 3080 is so powerful that even the latest and greatest CPUs can’t keep up.

But simply writing this off as a CPU issue didn’t sit right with us. In our review we tested with both the Ryzen 9 3950X and Core i9-10900K, and both often showed the same resolution scaling. We noted that part of the reason for the weaker-than-expected 1440p performance was the Ampere architecture itself and the change to the SM configuration. The doubled FP32 design can only be fully utilized at 4K and beyond, because at 4K a larger portion of each frame’s render time is spent on FP32 shader work. At lower resolutions like 1440p, the vertex and triangle load is identical to what we see at 4K, but at the higher resolution pixel shaders and compute effect shaders are more demanding, so they take longer and can therefore fill the SM’s FP32 ALUs more effectively.
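The intuition above is essentially Amdahl’s law: doubling FP32 throughput only speeds up the FP32 portion of the frame, so the overall gain depends on how much of the frame time is FP32-bound. A minimal sketch, with the FP32 fractions purely hypothetical numbers chosen for illustration (not measured values):

```python
def ampere_speedup(fp32_fraction, fp32_gain=2.0):
    """Amdahl's-law-style estimate of frame-rate speedup when only
    the FP32-bound portion of the frame benefits from a throughput gain.

    fp32_fraction: share of frame time spent in FP32-heavy shader work
    fp32_gain: throughput multiplier for that portion (2x for Ampere's SM)
    """
    return 1.0 / ((1.0 - fp32_fraction) + fp32_fraction / fp32_gain)

# Hypothetical fractions: suppose 4K frames are 80% FP32-bound
# while 1440p frames are only 50% FP32-bound.
print(f"4K:    {ampere_speedup(0.8):.2f}x")   # larger uplift
print(f"1440p: {ampere_speedup(0.5):.2f}x")   # smaller uplift
```

With these assumed fractions the model gives roughly a 1.67x uplift at 4K but only 1.33x at 1440p, which mirrors the shape of the scaling we measured, even though the real fractions vary per game.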

We often see high-performance GPUs better utilized at higher resolutions for similar reasons, so that in itself isn’t unusual. Higher resolutions always tend to put core-heavy GPUs to work, and they also minimize other system bottlenecks.

In our RTX 3080 review, we could also see how the RTX 2080 Ti extended its lead over the vanilla 2080 at 4K, going from ~23% faster at 1440p to ~28% faster at 4K. That’s only a 5 percentage point disparity, so the scaling at 1440p and 4K looks very similar.

With the RTX 3080 we saw a 21% advantage over the 2080 Ti at 1440p and then a much larger 32% margin at 4K. We suggested this could be explained by the nature of the Ampere architecture, which is more compute-heavy and more datacenter/AI oriented than gaming-first. So in this article we’re going to explore how true that statement is by adding 1080p and 720p data for the 14 games we test with. If CPU performance is influencing the 1440p results, this will become very apparent in the 1080p data and extremely obvious at 720p.
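The logic behind dropping to 1080p and 720p can be sketched with a simple min() model: the observed frame rate is capped by whichever of the CPU or GPU is slower, so if the CPU is the bottleneck, both GPUs collapse to the same frame rate at low resolutions. All the fps figures below are hypothetical, chosen only to echo the review's 32%/21% margins, not measured data:

```python
def observed_fps(gpu_limited_fps, cpu_limited_fps):
    # The slower of the two pipeline stages sets the delivered frame rate.
    return min(gpu_limited_fps, cpu_limited_fps)

CPU_CAP = 180  # hypothetical CPU-limited fps, constant across resolutions

# Hypothetical GPU-limited fps per resolution (illustrative only)
rtx_3080   = {"4K": 90, "1440p": 160, "1080p": 230, "720p": 300}
rtx_2080ti = {"4K": 68, "1440p": 132, "1080p": 195, "720p": 260}

for res in rtx_3080:
    a = observed_fps(rtx_3080[res], CPU_CAP)
    b = observed_fps(rtx_2080ti[res], CPU_CAP)
    print(f"{res}: 3080 is {a / b:.2f}x the 2080 Ti")
```

In this toy scenario the 3080's lead shrinks from ~1.32x at 4K to ~1.21x at 1440p, then vanishes entirely at 1080p and 720p where both cards hit the CPU cap. That flattening is the signature a pure CPU bottleneck would leave in our low-resolution results; if the margin shrinks but doesn't flatten, the architecture is implicated instead.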

For this test we’ll be running a Core i9-10900K at stock clocks and comparing the RTX 3080 against the 2080 Ti. Turing isn’t necessarily the best architecture at scaling efficiently down to lower resolutions, but it does appear to be better than Ampere in this respect, and since no other GPU comes close in performance, it’s what we have to use. Let’s get into the results.