Finding out whether your FPS are CPU or graphics card limited

Don't just take the "Unreal is CPU intensive" unmeasured blah-blah statements at face value.

Do this:
  • get framerate in a reproducible way that you like
  • go into the BIOS and downclock your CPU by 1/3rd, repeat benchmark
  • download an overclocking utility for your graphics card and use it to downclock your GPU by 1/3rd, repeat benchmark
  • for added fidelity, do the same for:
  • RAM
  • graphics card RAM (VRAM)
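To turn those benchmark runs into a verdict, compare how much each downclock hurts your framerate: the component whose downclock costs the most FPS is your limiter. A minimal sketch of that comparison; all the numbers below are made-up example measurements, not real results:

```python
# Rough way to read the downclock results: whichever component's
# downclock moves the FPS the most is your bottleneck.
# All numbers below are hypothetical example measurements.

baseline_fps = 90.0

# FPS measured after downclocking each component by ~1/3
downclocked_fps = {
    "CPU": 62.0,
    "GPU": 88.0,
    "RAM": 85.0,
    "VRAM": 89.0,
}

def fps_drop_percent(baseline, measured):
    """Percentage of FPS lost relative to the baseline run."""
    return (baseline - measured) / baseline * 100.0

drops = {part: fps_drop_percent(baseline_fps, fps)
         for part, fps in downclocked_fps.items()}

bottleneck = max(drops, key=drops.get)
for part, drop in sorted(drops.items(), key=lambda kv: -kv[1]):
    print(f"{part}: -{drop:.1f}% FPS")
print(f"Likely bottleneck: {bottleneck}")
```

With these example numbers the CPU downclock costs ~31% of the FPS while everything else barely moves, so the run would be CPU limited.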

I don't have time right now but I'll post mine later.

We need a reference thread about hardware-to-fps anyway.
 
Glad you found some useful info in here :)
Sounds like quite a journey for yourself!
I'm very seriously considering jumping ship to AMD, as Ryzen has impressed me with its reported price-to-performance ratio. Despite it still trailing Intel in single-threaded performance by a little, I'm seeing more and more games make use of more than four threads, which will probably only increase as time moves forward. And in general use, strong multi-threading is finally starting to open a gap over strong single-threading, especially when multitasking or doing anything related to video.
Could you name a few games where you think they would use more than 4 threads? If I've got one of them I might check it with Process Explorer and post a screenshot :)
 
Intel most certainly beats Ryzen when it comes to single core performance, and as of now that means most games will perform better on Intel, but there are other things to consider (besides price).

First of all, in most tests I've seen, CPU usage of Ryzen is always much lower than that of Intel, sometimes as low as 50%. What this means is that if you are the kind of person who likes to do other stuff while gaming (say, you are a streamer, but even if you just like to run background stuff in general), Ryzen may be a better option. Ryzen also performs better in things besides gaming, so it's more of a multi-purpose CPU, and that may be important to some people. Lastly, multi-threading is slowly becoming more common, so single core performance may not matter as much in the future.

I'd say AMD CPUs are very competitive right now. Their video cards, not so much, but Nvidia really shot themselves in the foot with that expensive G-Sync crap. That may be the deciding factor for a lot of people who are on a budget.
 
While I agree, you also have to consider that an earlier single-thread limit also means lower overall CPU usage!
So while it looks like you've got 50% CPU load to spare, Windows loads each core to 100% for a split second at a time.
Yes, that still means each core has spare capacity for other things, but it also means that an i7 with the same fps numbers might show the same, lower load!
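This averaging effect is easy to demonstrate with numbers: the overall figure divides the load across all cores, so a single saturated game thread bouncing between cores shows up as modest total usage. A toy illustration with made-up per-core samples:

```python
# Illustration of why "50% overall CPU load" can still mean a CPU limit:
# Task Manager averages over all cores, so one saturated game thread
# hopping between cores shows up as modest total load.
# Per-core samples below are made-up example data.

samples = [
    # (core0, core1, core2, core3) load in % for successive intervals
    (100, 10, 5, 5),
    (8, 100, 12, 4),
    (6, 9, 100, 7),
    (5, 8, 11, 100),
]

overall = sum(sum(s) for s in samples) / (len(samples) * len(samples[0]))
peak_any_core = max(max(s) for s in samples)

print(f"Average overall load: {overall:.1f}%")     # looks comfortable
print(f"Peak single-core load: {peak_any_core}%")  # the actual limit
```

The overall average here is around 30%, yet in every interval one core is pinned at 100%, which is exactly the single-thread limit hiding in plain sight.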
 
I can't run more than around 6, maybe at times 8 AI in ACC without maxing out my CPU, and I'm also on 2500k @ 4.4 (well, currently 4.5 in a desperate attempt to gain at least something). Not experiencing any judder, though, the game runs quite smooth. I'm not using vsync, though, and I have the framerate limited to 60 fps in Afterburner and to I believe 65 in game (not sure if the in-game limit is doing something when set like this, but I just feel better ;) ).

I have however experienced some serious judder when I was testing the game in Windows 7. If I was very close to maxing out the CPU, every time it maxed out, even if for a brief moment, the game pretty much froze for an instant. I didn't want to spend more time investigating as I don't plan to play the game on Win7 and I don't really have these issues with other titles there, but it was interesting to see. Though I must also say I was using slightly older GPU driver for my 970, like two or three months old (I'm up to date on Win10).

I also have considered doing some side-grade to an i7 my MB supports as I've been running into CPU limitations quite a lot lately with my streaming attempts (somehow the games keep getting more and more hungry), but even that seems unlikely to happen currently :cry:
 
Citation really needed here, because that goes against anything I've read for the past years, and also completely contradicts my own tests from when I was OC-ing my system. Besides a few special cases, RAM speed was never a factor for me, XMP or not. If you have a link to memory benchmarks that show significant performance changes with RAM speed, please share.

I think it's a very case-by-case thing with Intel. I've had a few situations where my mobo randomly reset its settings (a common thing in the past few years...) and I forgot to re-apply XMP, and my framerates were hit to various degrees.

It definitely has a larger impact on Ryzen, though, due to the intricacies of its architecture, so maybe it's going to become more of a thing moving forward. GPU, CPU, and RAM size will always be 1, 2, and 3 in importance, not arguing against that, but after those, memory speed should not be forgotten!

Not really, a normal CPU bottleneck will just slow the fps down, not cause judder. If you run a steady 60 and see problems, it's down to drivers, background services or other crap.
I have a pretty clean system, and stress-testing the CPU in ACC brought me down to 45-55 fps in places, all pretty smooth (on a 144 Hz screen) except for input latency.


See, that's what I've always thought and experienced too, but in ACC's case this judder manifests itself as I add more and more AI into races, which linearly scales up CPU utilisation, as has already been established in this thread. Unless the framerate is wavering so quickly that the fps counter readings literally can't keep up, I'm stumped. There's nothing running in the background that could interfere, and I've killed all my typical background tasks while running tests with zero difference.

I'm exploring one last option, which involves a combination of forcing my monitor to 60 Hz instead of sitting at the default 59.95 Hz by using a custom resolution, using RivaTuner's framerate limiter at 60, and forcing Vsync via the Nvidia control panel while turning it off in-game. Yet to try it with ACC, but mucking around in F1 2018's benchmark tool, another game that's given me similar issues in the past, shows some encouraging results.
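Since an fps counter can't really show this, frametimes are the better thing to log (RivaTuner/Afterburner can record them). A small sketch, with made-up frametime values, of how a decent-looking average can hide exactly the kind of spikes that feel like judder:

```python
# Judder often shows up in frametimes rather than the FPS counter:
# a decent average can hide occasional long frames. Sketch of
# spotting them in a frametime log (values in ms, made-up example).

frametimes_ms = [16.7, 16.6, 16.8, 16.7, 50.2, 16.7, 16.6, 16.9, 16.7, 16.8]

avg_ms = sum(frametimes_ms) / len(frametimes_ms)
avg_fps = 1000.0 / avg_ms

# Flag frames that take more than twice the average as stutter spikes.
spikes = [t for t in frametimes_ms if t > 2 * avg_ms]

print(f"Average: {avg_ms:.1f} ms ({avg_fps:.0f} fps)")
print(f"Stutter spikes: {spikes}")
```

One 50 ms frame in a run of 16.7 ms frames is a very visible hitch, but it barely dents the average fps readout.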

Could you name a few games where you think they would use more than 4 threads? If I've got one of them I might check it with Process Explorer and post a screenshot :)

Haha, well in recent times I'm more than positive Battlefield 1 (and 5's beta), any of the Forza games/demos, F1 2018 and pretty much any open world game released in 2018 that I've played have all seemingly asked for more than 4 threads. Feel free to test any of those out if you can. In the sim sphere, I'm actually kinda curious how RF2, iRacing and R3E compare thread-wise to AC and PC2 and the ACC results you've already posted. I already know AMS is massively single threaded. :)
 
I don't have pCars2 or Forza but knock yourself out :p
One note: the screenshot for BF1 is old, I don't have it installed currently, and I had deactivated HT for testing. So for comparison you need to divide all the load values by 2!
For me, "real game threads" are the ones that really stick out. These are the ones you'd want the same number of cores for. All the others can basically be bunched up on one spare core.
All the screenshots were taken with GPU load below 70% to definitely hit the CPU limit. I find it very interesting that the real multicore games like F1 2018 and Assassin's Creed: Origins don't go over 11% on any thread, but it's still a limit.
Memory bandwidth? Mainboard? Old CPU architecture? No clue...

I've put in how many cores (or at least CPU threads) you'd want for each game, in my opinion :)

Anyway, here they are:

Battlefield 1: 5-6
[screenshot: upload_2018-9-20_12-56-49.png]

Assassin's Creed: Origins: Hello Denuvo... 9-10
[screenshot: upload_2018-9-20_12-57-11.png]

Assetto Corsa: 3
[screenshot: upload_2018-9-20_12-57-40.png]

Raceroom: 4
[screenshot: upload_2018-9-20_12-57-48.png]

F1 2018: 8-10
[screenshot: upload_2018-9-20_12-58-6.png]

rFactor 2: 2-3
[screenshot: upload_2018-9-20_12-58-20.png]
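The "threads that really stick out" heuristic can be made mechanical: compare each thread's load against the background noise level and count the ones clearly above it. A rough sketch with made-up per-thread load values; the threshold factor is an arbitrary choice, not anything from Process Explorer:

```python
# A rough way to count "real game threads" from per-thread load numbers
# like the ones in the screenshots: threads well above the background
# noise are the ones you'd want a core each for. Example data is made up.

thread_loads = [11.0, 10.5, 10.8, 9.9, 2.1, 1.5, 0.8, 0.4, 0.3, 0.1]

def count_real_threads(loads, noise_factor=4.0):
    """Count threads whose load clearly sticks out above the median 'noise'."""
    ordered = sorted(loads)
    median = ordered[len(ordered) // 2]
    return sum(1 for load in loads if load > noise_factor * median)

print(count_real_threads(thread_loads))
```

For this example data, four threads stand well clear of the rest, so you'd call it a "4-thread game" and let the small helper threads share a spare core.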
 
@Justin So, inspired a bit by your suggestion of RAM importance, I did some (small) tests for ACC. It turned out that on my system (4670K at 4.2 GHz, GTX 1070, 2x8 GB of G.Skill 2133 DDR3), dropping to 1600 MHz cost me 5% of fps in ACC when CPU limited.
I didn't touch the timings, because from my experience of overclocking RAM, its frequency and timings are interchangeable to some extent.
I don't think going up from 2133 would be much different, so I'd expect a 5% improvement at best from high-performance RAM at this point (that's still DDR3).

So for anybody out there who has not enabled XMP yet: it will definitely help. Otherwise, the $$$ will probably be better spent on the CPU itself.
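The frequency/timings trade-off mentioned above comes down to first-word latency, which is roughly the CAS latency divided by the transfer rate, times 2000 ns. A quick comparison with two typical DDR3 kits (illustrative configurations, not the exact modules from the post):

```python
# Why frequency and timings are interchangeable to an extent:
# first-word latency in ns = CAS latency / transfer rate (MT/s) * 2000.
# Example kits below are typical DDR3 configurations (illustrative).

kits = {
    "DDR3-1600 CL9":  (1600, 9),
    "DDR3-2133 CL11": (2133, 11),
}

for name, (mt_s, cl) in kits.items():
    latency_ns = cl / mt_s * 2000
    print(f"{name}: {latency_ns:.2f} ns")
```

Both kits land around 10-11 ns of actual latency, so the faster kit mostly buys you bandwidth, not lower latency, which fits the "5% at best" observation.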
 

This is a very helpful post, thanks for all the effort! :)

__

Anyway, the frame timing fix I described in my last post has indeed improved things a little. Dry daytime conditions with few enough AI cars to not hit the CPU limit indeed look smoother and have more consistent frametime readings, which will probably help in any game I run from now on that exhibits judder and microstutter. It also gave me further confirmation that I can only really run 10-12 cars max in races before my CPU gives up.

However, the more I test ACC, the more I feel that I'd be a bit rash upgrading my rig based on its current performance alone. It seems that running any sort of decently sized AI race in conditions other than dry daytime is in need of optimisation work; less so rain conditions, but night racing is incredibly taxing at this stage.

I've started to test the GPU utilisation of my 1070 by running full grid races for a lap, then using replay mode in cockpit view, which takes a load off the CPU and eliminates my bottleneck. Here, full grid AI races in day/dry get me a steady 60 fps at 1440p with mostly high settings, with none of the stutter I experience when actually driving. Doing the same test in the rain forces me to run a few settings at mid, but nothing too drastic.

Once the lights need to be used, though, things go downhill fast. Lowering my effective render resolution to 1080p with mid settings only just holds up for me (yeah, I understand night conditions in racing games need more grunt, but this is a kinda extreme case haha). And when adding rain to the night mix, it is currently impossible to stay above any sort of reasonable framerate, with my GPU failing to hit 70% utilisation despite maxing out in every other test. I've yet to properly confirm whether this was just a bug or is reproducible (if anyone wants to try this test, 20 cars at night in heavy rain in cockpit view replay mode, feel free!), but it leads me to believe that even though UE4 is a well-used render engine, plenty of optimisation will surely still take place over the course of EA, probably most of it coming nearer to 1.0.
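For anyone wanting to repeat this kind of utilisation test, `nvidia-smi` can log GPU load once per second. The sketch below just parses the sort of CSV output that `nvidia-smi --query-gpu=utilization.gpu --format=csv,noheader,nounits -l 1` prints; the readings are made-up examples, not my actual numbers:

```python
# Compare GPU utilisation while driving vs in replay: a GPU that can't
# stay above ~90% while FPS is low points to a limit elsewhere.
# sample_log mimics `nvidia-smi --query-gpu=utilization.gpu
# --format=csv,noheader,nounits -l 1` output; values are made up.

sample_log = """98
97
99
64
61
58
"""

readings = [int(line) for line in sample_log.split() if line]
avg = sum(readings) / len(readings)

print(f"Average GPU utilisation: {avg:.0f}%")
print("GPU-bound" if avg > 90 else "Likely limited elsewhere (CPU?)")
```

Running the real logger in both scenarios and comparing the averages is a quick way to tell whether the night/rain drop is the card itself or something upstream of it.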

Some people would say this is obvious, but there was also a fear that optimising the visual performance is in Epic's hands and not Kunos', which I now reckon isn't the case. It's a 0.1 release and should be treated that way, not put on the same scale as the fully released UE4 games out there. If anything, the slow but gradual performance improvement of PUBG shows that even the worst-optimised UE4 games (and it was one of them in early access, yikes) can be improved to acceptable levels, which, combined with some possible physics optimisations to help the CPU side, could mean that those of us on older or less powerful rigs might be able to push this game further than we can now. :)

It might also explain why the current special events are focusing on driving in the dry and day, with the hotstint event the only one progressing to night conditions, which are fine performance wise until you add a bunch of AI cars into the mix, whereas the AI races in the day run fine apart from my CPU being unable to handle a full grid at this stage.

Shame this whole thing has given me an upgrade itch that could bug me for a little while, but that's my fault, not ACCs. :D
 
Exactly, anyone claiming the low-ish performance is because of UE4 simply ignores (or forgot) plenty of UE4 games that perform absolutely fine. It's not the engine, it's what and how you use it that matters most.

Though I guess UE4 has this reputation of bad performance in part also due to the fact that it's often used for some of the more ambitious games that can have performance issues for that reason alone, or used by developers that are not skilled enough yet and have chosen the engine because it is accessible and fairly easy to work with. It's likely a similar situation to where people often scoff at the Unity engine, which is often used for the most terrible shovelware products you can imagine and therefore has the reputation of being really bad by association, despite actually being a pretty advanced and capable engine.
 
  • Deleted member 197115

Exactly, anyone claiming the low-ish performance is because of UE4 simply ignores (or forgot) plenty of UE4 games that perform absolutely fine. It's not the engine, it's what and how you use it that matters most.
You mean very few. :)
 
That drop at sub 50 FPS at night:


Early access or not, it seems clear a mid-range PC is insufficient to run this at a steady 60 FPS at max settings. Bad news for those of us on a budget. I think I'll wait to see what AMD does next year. The Ryzen 3000 series and their 7 nm GPUs ought to come out by then, hopefully as early as possible, though I have a feeling we'll probably have to wait until late 2019.
 
It looks like the 1060 is permanently at over 95% load, so it might be just the graphics card limiting.
And that would be totally expected: a 2.5-year-old mid-range card for a 2019 title that promises modern high-end graphics.
 
Yeah. The issue is that AMD GPUs aren't that great, and I gotta go AMD because I have a FreeSync monitor and I'm done with vsync. I hate it with a passion and can't wait to get rid of it. The RX 580 is more or less the same as a GTX 1060, and the Vega just isn't worth it for the money. AMD had better come up with something good for their 7 nm cards come 2019.
 
It looks like the 1060 is permanently at over 95% load, so it might be just the graphics card limiting.
And that would be totally expected: a 2.5-year-old mid-range card for a 2019 title that promises modern high-end graphics.
It's still a current gen graphics card, though. It's kind of risky releasing a game that's built for future hardware...
 
  • Deleted member 197115

It looks like the 1060 is permanently at over 95% load, so it might be just the graphics card limiting.
And that would be totally expected: a 2.5-year-old mid-range card for a 2019 title that promises modern high-end graphics.
The situation is no better with a 1080 Ti at 4K.
We'll see if we see any significant improvement in October, when Kunos plans VR support.

FRAPS works for FPS monitoring.
 
It's still a current gen graphics card, though. It's kind of risky releasing a game that's built for future hardware...
That's what every graphically big game does, though: Witcher 3, Assassin's Creed, Battlefield. The highest settings are meant for future hardware, the mid-high for current hardware, and the lowest settings mostly look like full-on garbage but run on quite aged hardware.
Also, it's early access, so they have to plan for the real release, which will be when the future hardware has been out a few months.
 
