(Image credit: TiN and K|ingp|n of team PURE at Kingpincooling.com)
In the DIY computer world a lot of people worry about a video card (GPU) "bottlenecking" on a given CPU, or a given CPU bottlenecking a GPU. In this article I will explain what they are talking about, and discuss whether or not it's worth worrying about.
First off is the answer to the question “What is this bottlenecking you speak of?!”
Here’s a quick tip: The rig in the picture is not bottlenecking! …. Hit Read More to find out!
What IS bottlenecking?
It's a fairly simple concept. Say you want a GPU that is capable of rendering 100 frames per second (100fps) in your favorite game. So you go looking, and you find out that an XTS3600 GPU will get the job done (all names in this article are made up). You think you're all set, but when you come home and plug the GPU into your computer you only get 30fps. What went wrong?
The issue is that you're using a Semelleron 25 CPU, a single core 1GHz chip. It simply is not capable of keeping the card fed.
Worse, you could plop down $800 for a top of the line dual GPU video card and you still wouldn't get much more than 30fps.
The issue is that while the GPU renders (draws) the image you see on the screen, it has to be told what to do. It has no idea where the tree is supposed to be, what angle you’re looking at it, what color it is, or even where each polygon is supposed to be. The CPU has to tell the GPU what to do.
If your CPU is ancient, it cannot think fast enough to figure out where that tree is supposed to be, and the colors, and the polygons, PLUS of course the CPU also has to handle the physics of the game, the mouse and keyboard input, the hard drives, the networking, and everything else.
That, is bottlenecking!
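To make the idea concrete, here is a toy model of that frame pipeline. The function name and all numbers are made up for illustration (matching the fictional parts above), not measurements:

```python
# Toy model of the CPU/GPU frame pipeline (all numbers are made up).
# Each frame, the CPU must prepare draw data before the GPU can render it,
# so the effective frame rate is capped by the slower of the two stages.

def effective_fps(cpu_fps_cap, gpu_fps_cap):
    """The whole pipeline runs at the speed of its slowest stage."""
    return min(cpu_fps_cap, gpu_fps_cap)

# An XTS3600-class GPU that could render 100fps, fed by an old
# single-core "Semelleron 25" that can only prepare 30 frames'
# worth of draw data per second:
print(effective_fps(cpu_fps_cap=30, gpu_fps_cap=100))   # -> 30

# Swapping in an $800 dual-GPU card doesn't help; the CPU is still the limit:
print(effective_fps(cpu_fps_cap=30, gpu_fps_cap=250))   # -> 30
```

Notice that making the GPU faster changes nothing; only a faster CPU would raise the number.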
Next up is….
How do I know if I’ll have a problem?
The easiest way of course is to look at what other people are using and see how they’re doing, but failing that here are some basic guidelines:
- The higher the resolution and detail settings, the harder things are on the GPU. This is because there are more things to render in a given frame, and way more pixels that have to be colored in and such in a single frame. The CPU doesn’t care much, it sends roughly the same data to the GPU regardless of resolution and detail settings.
- The lower the resolution, the more the CPU has to work relative to the GPU; each frame has its own stuff that needs to be sent to the GPU. This is often cited as a reason to upgrade your CPU if you're gaming at lower resolutions, but in my opinion it isn't one. If the CPU can cough up data for 100fps at high resolution, it can certainly cough up the data for 100fps at low resolution. Really, the CPU just needs to be able to supply the GPU with enough data at any resolution, and that doesn't generally take too much.
- Details are hard on the GPU. Anti-Aliasing and Anisotropic Filtering are especially brutal, because parts of the same frame have to be rendered multiple times before it can be sent to the monitor. They do not, however, make the CPU work much harder.
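The guidelines above can be sketched with another toy model. The constants here are invented purely to show the shape of the effect, not real throughput numbers for any card:

```python
# Toy illustration (made-up constants): GPU work per frame scales with
# pixel count and detail settings, while CPU work per frame stays roughly
# constant, so the CPU cap doesn't move with resolution.

def gpu_fps_cap(width, height, detail_cost=1.0, pixels_per_second=200e6):
    # Time to render one frame grows with pixel count and detail level.
    return pixels_per_second / (width * height * detail_cost)

CPU_FPS_CAP = 100  # the CPU can prepare ~100 frames of draw data per second

for w, h in [(1280, 720), (1920, 1080), (2560, 1440)]:
    fps = min(CPU_FPS_CAP, gpu_fps_cap(w, h))
    print(f"{w}x{h}: ~{fps:.0f} fps")   # roughly 100, 96, 54
```

At 1280x720 the GPU could do more than 100fps, so the CPU cap is the limit; at 2560x1440 the GPU is the limit and the CPU cap is irrelevant. That is the whole resolution argument in three lines of output.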
Lastly, we have…
Should I worry about this?
And the answer to that is…. No, generally not: if your CPU is capable of running the game at all, it can do so at any resolution and with any GPU.
Some logical thought is required, though. Buying a $500 video card and using your $20 CPU to run it is not going to work out very well for you; a $100 GPU would do just as much on that $20 CPU.
Realistically speaking, any modern dual (or more) core CPU operating at 3GHz or more is going to be just fine with any video card at any resolution in pretty much any game. Some games require beefy CPUs for physics and such; for those games you might need more CPU, but that's not because it's bottlenecking the GPU specifically.
I think that a rough rule of thumb would be that maintaining a 1:2 ratio between CPU price and GPU price will keep things operating quite happily. If your GPU runs $500, your CPU should be in the $200-300 range to keep it well fed. At the very top end you'll need to overclock the CPU to keep the GPU happy regardless of which CPU you have; 4GHz is generally fairly easy to attain with a good CPU cooler, and a 4GHz i7 will keep a top end GPU well fed.
(Image credit: Bobnova)
This Zotac GTX480 was at roughly 70°C below zero, fed by a Core i7-2600K at 5.3GHz. It was still bottlenecked for benching purposes.
A $100 GPU, on the other hand, doesn't require nearly as much to keep it fed; a $50-60 CPU should do fine. At this end of things you have to look around, though: you can find a single core CPU that was released in 2006 or so for $40, and you can find a modern dual core for $60. The $60 CPU is well over twice as powerful, despite only costing 50% more.
Being a rough rule of thumb, it's something that gets you pointed in the proper direction, not a hard rule you have to maintain at all costs.
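For what it's worth, that rule of thumb is simple enough to write down. The 0.4-0.6 band below is my own illustrative spread around the 1:2 ratio, not a figure from any benchmark:

```python
# Sketch of the rough 1:2 CPU-to-GPU price rule of thumb.
# The 0.4-0.6 band is an illustrative assumption, not a hard rule.

def suggested_cpu_budget(gpu_price):
    """Return a loose (low, high) CPU budget for a given GPU price."""
    return gpu_price * 0.4, gpu_price * 0.6

low, high = suggested_cpu_budget(500)
print(f"${low:.0f}-${high:.0f}")   # -> $200-$300
```

Again, treat the output as a starting point for shopping, not a target to hit exactly.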
Probably the best way to find out is to register for the FunkyKit forums and newsletter, and then post your question in the FunkyKit forums. I or one of our other experts will always be happy to answer questions and help you get things figured out.
(A final footnote: the grand irony is that the rig in the picture up top actually is bottlenecking, but that is due to the nature of benching. When you're running 3D benchmarks competitively you can never have "enough" CPU power; not even the dual six-core+HT $1700-each CPUs in that rig are enough.)
You can also check out my other article on CPUs, Cores and Threads.