Intel 12th-Gen CPUs

  • Thread starter: Deleted member 197115
Another thing to consider is the protection DDR5's ECC provides against remote bit-flipping attacks such as Rowhammer, versus DDR4. TL;DR: it's better, but still a problem.


 
Last edited:
  • Deleted member 197115

Already had an i9-12900K ordered, but after checking the situation with DDR5, which the motherboard I'm planning to get requires, I cancelled. Will wait until the situation stabilizes; for screen use the i9-9900K is plenty anyway, and I've turned the VR page for now.
 
Some real world numbers for i9-9900K to i9-12900K gain in ACC.
Nice gain but not super crazy impressive for the price of admission. :(
Honestly I'm impressed that a system with such high GPU usage got even that much of an FPS increase. Maybe the GPU was fibbing about its usage of course (I seem to recall Rasmus saying that when the GPU downclocks, it can report ~100% usage at that frequency, which is kinda dumb really). A better test case will be people who are running much lower GPU usage, i.e. well into CPU-bound territory.
Another thing to consider is the protection DDR5's ECC provides against remote bit-flipping attacks such as Rowhammer, versus DDR4. TL;DR: it's better, but still a problem.
Interesting articles. The second one (from 2018, though it's not easy to tell) only deals with "normal" (pre-DDR5) ECC though, rather than the new on-die sorta-ECC that DDR5 comes with.

It sounds like the new flavour of on-die ECC is solely to allow DDR5 to push the limits even further though (e.g. higher bit densities), which seems like another way of declaring that the bits are now so fragile that they can't be relied upon to keep their state even without a rowhammer attack :unsure:
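
For anyone wondering what "correcting a flipped bit" actually looks like, here's a minimal Python sketch of the textbook single-error-correcting Hamming(7,4) code — purely illustrative, not the actual scheme DRAM vendors put on-die (those details aren't public):

```python
# Toy Hamming(7,4): 4 data bits + 3 parity bits. Any single bit flip
# (the kind Rowhammer induces) can be located by the syndrome and undone.

def encode(d):                      # d = [d1, d2, d3, d4]
    p1 = d[0] ^ d[1] ^ d[3]         # parity over codeword positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]         # parity over positions 2,3,6,7
    p3 = d[1] ^ d[2] ^ d[3]         # parity over positions 4,5,6,7
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def syndrome(c):                    # 0 = clean, else 1-based position of the flip
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    return s1 + 2 * s2 + 4 * s3

word = encode([1, 0, 1, 1])
word[4] ^= 1                        # simulate a single Rowhammer-style bit flip
pos = syndrome(word)                # -> 5, the flipped position
word[pos - 1] ^= 1                  # correct it
assert word == encode([1, 0, 1, 1])
```

The catch is that a strong enough attack can flip more than one bit per word, which a single-error-correcting code can't fix — hence "better but still a problem".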
 
  • Deleted member 197115

Honestly I'm impressed that a system with such high GPU usage got even that much of an FPS increase. Maybe the GPU was fibbing about its usage of course (I seem to recall Rasmus saying that when the GPU downclocks, it can report ~100% usage at that frequency, which is kinda dumb really). A better test case will be people who are running much lower GPU usage, i.e. well into CPU-bound territory.
If you don't cap FPS, the GPU will be at around 100% unless the system is CPU-bound. A better CPU reduces the CPU's share of the frame time, allowing the GPU to push more frames per second at the same load.
But 5 fps is really pathetic for all the trouble.
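
To put rough numbers on that — a toy model of my own, not ACC's actual pipeline: with the CPU and GPU working in parallel, frame time is roughly the larger of the two costs, so a faster CPU only buys FPS while the CPU is the bottleneck:

```python
# Toy model: frame time ~= max(CPU cost, GPU cost) per frame.
def fps(cpu_ms, gpu_ms):
    return 1000.0 / max(cpu_ms, gpu_ms)

gpu_ms = 11.0                       # assumed GPU cost per frame (~91 fps ceiling)
print(fps(13.0, gpu_ms))            # CPU-bound at 13 ms: ~77 fps
print(fps(13.0 / 1.3, gpu_ms))      # CPU 30% faster: now GPU-bound, ~91 fps
print(fps(8.0, gpu_ms))             # already GPU-bound: a faster CPU adds nothing
```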
 
Last edited by a moderator:
"Spa - Quick Race - Mid Day - 42 cars - start replay from the run up to start/finish
i9-9900K - starts at 92FPS (98% GPU 20% CPU with proc explorer showing 1 core pegged), drops to 77FPS to 82FPS (75 to 80% GPU) through the start finish straight until the end
i9-12900K - starts at 105FPS (98% GPU 15% CPU with proc explorer showing 1 core pegged), drops to 90FPS to 95FPS (90 to 95% GPU) through the start finish straight until the end"


Well, I think the real gain is 77 to 90 fps, which is a gain of about 17%. Not superb, but it's something.

At 98% GPU load, I'm pretty sure the GPU frequency didn't stay the same... or there's something else happening regarding the GPU "limit".
Maybe Resizable BAR was activated, or something else changed during the transition.

"one core pegged" indicates that the core clocks were fluctuating and not all cores were locked to the same frequency.
When I put my 10600k to default, I'll see one core at full load.
When I lock all cores to the same speed, I'll see low loads across all cores.
So maybe the Windows scheduler, E-cores, and default vs. "overclocked" settings play a role in it...
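
If anyone wants to check this on their own box, here's a quick sketch using the psutil library (pip install psutil) that samples per-core load and, where the OS exposes it, per-core clocks — run it while the sim is loaded:

```python
# Sample per-core load and (where exposed) per-core frequency for one second.
import psutil

loads = psutil.cpu_percent(interval=1.0, percpu=True)
freqs = psutil.cpu_freq(percpu=True) or []   # Windows may report one aggregate entry

for i, load in enumerate(loads):
    mhz = f"{freqs[i].current:.0f} MHz" if i < len(freqs) else "n/a"
    flag = "  <-- pegged" if load > 95 else ""
    print(f"core {i:2d}: {load:5.1f}% @ {mhz}{flag}")
```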
 
If you don't cap FPS, the GPU will be at around 100% unless the system is CPU-bound.
Well yeah that was exactly what I was getting at - the high GPU load would suggest that the system was probably not CPU-bound, and thus a CPU speed improvement shouldn't have made as much difference as it did. Bit odd really.
At 98% GPU load, I'm pretty sure the GPU frequency didn't stay the same... or there's something else happening regarding the GPU "limit".
Ah...
"one core pegged" indicates that the core clocks were fluctuating and not all cores were locked to the same frequency.
When I put my 10600k to default, I'll see one core at full load.
When I lock all cores to the same speed, I'll see low loads across all cores.
Good point. He mentioned using "proc explorer", so I had internally translated what he said to "one thread maxed out" (when viewing the threads of the ACC process), because the way I see it, Process Explorer is at its most useful (in this scenario) for showing you how many threads are maxed out. If it was truly being used just to watch the CPU core utilisations, then Task Manager would have done just as well (and just as misleadingly)... Or maybe I'm having a senior moment here! :roflmao:

Above all else, this particular comparison may be semi-meaningless anyway, since it used a replay rather than real gameplay, and Kunos have warned against drawing conclusions from this approach. (I guess there's no internal benchmark in ACC or people would just use that.)
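
For what it's worth, the "one thread maxed out" check can be scripted rather than eyeballed in Process Explorer. A psutil sketch — the process name is my guess and needs adjusting for your install:

```python
# Sample each thread's CPU time twice; 100% of one core == one pegged thread.
import time
import psutil

def busiest_threads(name, window=2.0):
    proc = next(p for p in psutil.process_iter(["name"]) if p.info["name"] == name)
    before = {t.id: t.user_time + t.system_time for t in proc.threads()}
    time.sleep(window)
    for t in proc.threads():
        used = (t.user_time + t.system_time) - before.get(t.id, 0.0)
        pct = 100.0 * used / window
        if pct > 50.0:
            print(f"thread {t.id}: {pct:.0f}% of one core")

busiest_threads("AC2-Win64-Shipping.exe")   # assumed ACC process name; adjust
```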
 
  • Deleted member 197115

"Spa - Quick Race - Mid Day - 42 cars - start replay from the run up to start/finish
i9-9900K - starts at 92FPS (98% GPU 20% CPU with proc explorer showing 1 core pegged), drops to 77FPS to 82FPS (75 to 80% GPU) through the start finish straight until the end
i9-12900K - starts at 105FPS (98% GPU 15% CPU with proc explorer showing 1 core pegged), drops to 90FPS to 95FPS (90 to 95% GPU) through the start finish straight until the end"


Well, I think the real gain is 77 to 90 fps, which is a gain of about 17%. Not superb, but it's something.

At 98% GPU load, I'm pretty sure the GPU frequency didn't stay the same... or there's something else happening regarding the GPU "limit".
Maybe Resizable BAR was activated, or something else changed during the transition.

"one core pegged" indicates that the core clocks were fluctuating and not all cores were locked to the same frequency.
When I put my 10600k to default, I'll see one core at full load.
When I lock all cores to the same speed, I'll see low loads across all cores.
So maybe the Windows scheduler, E-cores, and default vs. "overclocked" settings play a role in it...
Spa - Quick Race - Sunset - 42 cars - start replay from the run up to start/finish
i9-9900K - starts at 75 to 80FPS (98% GPU 20% CPU with proc explorer showing 1 core pegged) and stays the same through the end
i9-12900K - starts at 85FPS (98% GPU 15% CPU with proc explorer showing 1 core pegged), and stays roughly the same through the end

Summary: Averages 5ish FPS gain. (ugh)
 
Good point. He mentioned using "proc explorer", so I had internally translated what he said to "one thread maxed out" (when viewing the threads of the ACC process), because the way I see it, Process Explorer is at its most useful (in this scenario) for showing you how many threads are maxed out. If it was truly being used just to watch the CPU core utilisations, then Task Manager would have done just as well (and just as misleadingly)...

I'm thinking the same!

The second test that Andrew posted, the real quick race with no replay, shows only a 5 fps improvement. That's definitely hitting the GPU limit there...

Would be interesting to see the same comparison with 50% GPU load!
 
Umm, no that's still a replay, surely? Agreed that it's more deeply into GPU-bound territory though.
Oops, yeah I should read more thoroughly, sorry!

Well, a CPU comparison between CPUs that definitely have enough cores overall can be done via replays.
If each core has 20% more performance, you'll see 20% higher fps no matter whether the game uses 1 thread or 3 threads.

But yeah, definitely hitting gpu limit there... Should put the resolution scale to minimum and re-run his replays!
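
The scaling claim, spelled out: when CPU-bound, frame time is set by the longest-running thread, so speeding every core up by the same factor raises fps by exactly that factor regardless of thread count (illustrative numbers only):

```python
thread_ms = [10.0, 4.0, 3.0]                 # per-frame cost of each game thread
fps_old = 1000.0 / max(thread_ms)            # bound by the 10 ms thread: 100 fps
fps_new = 1000.0 / (max(thread_ms) / 1.2)    # every core 20% faster: 120 fps
print(fps_old, fps_new, fps_new / fps_old)   # 100.0 120.0 1.2
```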
 
  • Deleted member 197115

Umm, no that's still a replay, surely? Agreed that it's more deeply into GPU-bound territory though.
Rendering is where both AC and ACC become CPU-bound all the time, due to the single rendering thread. The player's and opponents' physics calculations are spread across multiple threads/cores, so that's rarely the issue.
You can profile both a live race and a replay; the results will be similar, with more fluctuation live due to slight variations in each timed run.
 
This case moves some air.
It is using the 4000MHz default memory timings for the moment.
Just happy that everything came up properly.
Haven't started playing with memory settings or Win 10 cache settings.

It appears to be a bit faster at some things.

I was running VR @ 144fps with 2.0x supersampling in some titles and it didn't skip a beat.
The 2080Ti appears to be running easier not harder.

I used the SteamVR developer scrolling latency graph to see how it was doing.

DR 2.0 with Ultra settings and no reprojection was staying safely within the green at 90fps with a comfort margin of about 2.5-3ms. At 120fps it was in the yellow, going over by about 2ms. The timing felt more natural; I felt like I was sliding around hairpins under better control. I have no idea how much of that is placebo because I was excited about a new toy, so take those observations with a grain of salt.

In iRacing the CPU is behaving differently. I have two cores just running flat out at nearly 100% all the time now and another at about 60%. The 2080Ti is only at about 50-60% at 90fps. At 120fps the 2080Ti is running closer to 90%, and the SteamVR latency graph was staying safely in the green at 120fps for complex tracks.

 
I did need to set an environment variable to get In Death to work, but I haven't had to disable the E-cores for anything I play. The In Death fix was introduced with the 11th-gen Intel chips.
 
  • Deleted member 197115

Do you use fpsVR? It can record stats so you can compare frame times before and after, which shows the gain more accurately, since FPS is fixed in VR.
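
The comparison itself is just averaging, e.g. something like this once you've got the logged frame times out as one number per line (the real fpsVR export format may differ, so the parsing and the file names here are assumptions):

```python
# Compare mean frame time between two recorded sessions; in VR the fps is
# pinned to the refresh rate, so frame time is where the gain shows up.
import statistics

def mean_frametime_ms(path):            # assumes one frame time (ms) per line
    with open(path) as f:
        return statistics.mean(float(line) for line in f if line.strip())

old = mean_frametime_ms("9900k.csv")    # hypothetical file names
new = mean_frametime_ms("12900k.csv")
print(f"{old:.2f} ms -> {new:.2f} ms ({100 * (old - new) / old:.1f}% faster)")
```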
 
Do you use fpsVR? It can record stats so you can compare frame times before and after, which shows the gain more accurately, since FPS is fixed in VR.
I haven't yet. I just looked at that utility. For about $3 I'll pick it up and see how it works.

What I'm seeing is latency vs. frame rate. The frame rate sets the bar that cannot be crossed if you want a stable speed, and the graph shows how much room there is to spare under that line.

Basically I'm seeing the green graphs like FpsVr shows, but don't have the other statistics showing.

For example at 90 fps you have 11.1ms between frames.
The frame latency that I'm seeing in Dirt Rally 2.0 is about 8.5ms meaning that it has 2.6 ms to spare.

At 120 fps you need 8.3ms between frames without reprojection. DR 2.0 is about 11ms at 120fps meaning it is lagging by 2.7ms and the lines go yellow, but it is compensating. When it goes red, there are dropped frames.

In Death at 120fps is about 6.5-7ms, so it fits well under the 8.3ms frame time.

Meanwhile in iRacing at 120fps I'm seeing around 10+ ms and it is nearly entirely green, with an odd yellow line only rarely.
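
That's all the same arithmetic: the budget is 1000/fps milliseconds per frame, and the headroom is whatever your measured frame time leaves of it — the numbers above, re-run:

```python
def budget_ms(fps):
    return 1000.0 / fps                 # 90 -> 11.11 ms, 120 -> 8.33 ms

for fps, measured in [(90, 8.5), (120, 11.0), (120, 6.75)]:
    headroom = budget_ms(fps) - measured
    state = "green" if headroom > 0 else "yellow/red (compensating or dropping)"
    print(f"{fps} fps: {budget_ms(fps):.2f} ms budget, {measured} ms measured, "
          f"{headroom:+.2f} ms -> {state}")
```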
 
Last edited:
I played a regular VR game at 144fps with 2.0x supersampling and fpsVR kept it in the green the whole time.

After 20 minutes of gameplay the CPU was staying around 40C and the GPU was staying around 60C

144fps is a 6.94ms frame time and it was averaging about 6.0ms. Not bad :)

I'll do some harder racing Sim testing when I get setup to test out some effects.

DCS is supposed to be getting solid multi-core support, or may have already. I'll be very curious to see how that goes.
 
Last edited:
