Intel 12th-Gen CPUs

Word is the ETA for the 7800X3D is January 2023. Really torn whether to wait or just shoot myself in the head.
Darn, I didn't think it'd be that early. If that's the case, then I'd definitely wait. That's only like 3 months past the non-3D chips.
Be sure to tell us all about it!

4th qtr 2022, right?
Crystal should be very late September to October. 12K is Q4 (2022). Hopefully no delays. The specs on both look absolutely incredible. Spec-wise, Pimax will be the leader in all areas: h.FOV, v.FOV, total resolution, pixel density (PPD), refresh rate, and more.
 
Said to be three X3D versions, i.e. all the announced CPUs so far to be released. Mad not to wait if true, but they're also expected to run as hot as or hotter than Intel. Power consumption unknown.
 
Yes, all the rumours are saying that the Ryzen 3D V-Cache models are likely to be announced at CES in January for a Q1 2023 launch. The Zen 4 launch in September is to beat Intel to the first punch. I'm sure Intel will win at gaming (due to the better latency of the single-die design they have always had); the 3D V-Cache models will swing in later.


The 5800X3D, by contrast, was released late because it was new technology, and they had some capacity from the chiplets made for the Epyc 3D server CPUs to show they could match the 12900K.

I don't believe it makes any sense to delay it very long for Zen 4, as it solves one of the major flaws of the chiplet/IO-die design and gives substantial advantages over Intel's design, especially when they scale it up to 32- and 64-core workstation CPUs later on.

The issue is simply timing, as it's harder to make the stacked chiplets and have enough stock for a CPU launch.
If they waited, they would be behind Intel for months, which is bad for business. It could still not happen, like you say, but that's my 2 pence worth.
 
Potentially the changes AMD has made to front-end throughput, the move to DDR5, and the larger cache may mitigate some of the costs of the chiplet design, and the extra-cache model might not see as much benefit in games as we saw with the 5000 generation. All the early Zen chips were really bad for memory latency, but it might be somewhat improved with this generation, or at least mitigated by a big increase in cache size and bandwidth. It's hard to know if the 3D-cache CPU will be necessary; Intel might not be as competitive this generation, or it might be overly so in games, and it once again boils down to latency.

I believe, despite claims to the contrary, there will be stock issues in the beginning, so it's going to be tricky to either buy the CPU as it's released or wait and see what Intel's 13th gen looks like. I'd kind of like to keep using my 32 GB of DDR4 sticks if possible, but it really depends on the performance hit and how 13th-gen Intel even performs; I'm not keen on the big/little hybrid CPU design.
 
Crystal should be very late September to October. 12K is Q4 (2022). Hopefully no delays.
I'm not sure I would put odds on that.

If history is any indication, Pimax has yet to launch a product on time and their delays have tended to be on the order of 6 - 18 months.

Typically competing products that were not announced until after a Pimax product was supposed to ship end up being released before the Pimax products get into the hands of customers.

But maybe this time will be different....
 
I haven't installed the CPU cover bracket yet. I did install the August 16 motherboard firmware update and like before I tested it at 4,000 MHz only to have it blue screen on me after about 10 minutes of play time. So I'm back to 3800 MHz which runs reliably on Gear 1.

Is there any chance that better CPU cooling and this bracket could have any impact on this?

Still waiting for Seasonic to officially launch their ATX 3.0 PSUs so I can retire the 12-year-old Corsair 1200 W Gold PSU I have now.

I'm still thinking I'll hold off on any further GPU or CPU/MB upgrades, even once they are released, until Valve has announced something.
 
Is there any chance that better CPU cooling and this bracket could have any impact on this?
I'd bet on "no", since it failed during play time (when thermal throttling seems highly unlikely to have been occurring) rather than during a stress test. But you can of course keep an eye on temps during play to see how high they go...
I still can't get my head around how big a fail it was for Intel to get that simple mechanical issue so badly wrong.
 

I'd bet on insufficient memory voltage. I was in the same shoes myself before finding the right settings that are rock stable at 4,000 MHz; vanilla XMP was not enough.
 

Still on Z390, so it's Gear 1 I guess.
 
I haven't installed the CPU cover bracket yet. I did install the August 16 motherboard firmware update and like before I tested it at 4,000 MHz only to have it blue screen on me after about 10 minutes of play time. So I'm back to 3800 MHz which runs reliably on Gear 1.

Is there any chance that better CPU cooling and this bracket could have any impact on this?

Still waiting for Seasonic to officially launch their ATX 3.0 PSUs so I can retire the 12-year-old Corsair 1200 W Gold PSU I have now.

I'm still thinking I'll hold off on any further GPU or CPU/MB upgrades, even once they are released, until Valve has announced something.
What CPU, motherboard, and RAM? And what settings, speeds, voltages, etc.? The IMC on any 12900K should be able to handle 4,000 MHz in Gear 1.
 
I don't think the issue is with the MB. The problem is that I'm using 4 sticks of 8 GB RAM instead of 2 sticks of 16 GB.

If I pull the second pair out and only use 2 x 8 GB, it works reliably, but DCS really needs 32 GB to run well.

Of course 13th gen is coming soon and DDR5-6400 is now available in 16 GB sticks, so... I may hand this one down to my daughter soon.
 
I don't think the issue is with the MB. The problem is that I'm using 4 sticks of 8 GB RAM instead of 2 sticks of 16 GB.

If I pull the second pair out and only use 2 x 8 GB, it works reliably, but DCS really needs 32 GB to run well.

Of course 13th gen is coming soon and DDR5-6400 is now available in 16 GB sticks, so... I may hand this one down to my daughter soon.
Are you running a 12900K? I don't know why I assumed that in my last message (probably got mixed up with other posts & articles I was reading at the time).

I'd go with 2x 16 GB DDR4 with Samsung B-die chips. They'll run just as fast as, if not faster than, most DDR5 kits and cost much less. Make sure they're dual-rank sticks (just about any 16 GB stick using Samsung B-die will be DR). DR sticks are much faster than single-rank, unless you're running 4 SR sticks in a motherboard that uses T-Topology, which is basically the same as 2x DR sticks, but almost no Z690 boards use T-Topology.

On my Z690 12900KS, I got a DDR4 Samsung B-die 3600 MHz 16-16-16-36 kit running at 4266 MHz 16-16-16-32 with almost all the subtimings (secondaries, tertiaries, etc.) tuned. I can also run it at 4000 MHz 14-14-14-28 (with tuned subtimings). It destroys almost all DDR5 kits; you need DDR5 at something like 7000 MHz with tuned subtimings to get the same performance, and at that level DDR5 will probably cost 2x-3x more than the DDR4 kit.

If you're using a CPU with a huge cache, like the AMD 3D chips, then RAM speed barely makes a difference, but with an Intel 12th or 13th gen I'd personally get a Z690 DDR4 board, some killer 2x 16 GB Samsung B-die RAM, OC and tune the RAM, and then slap a 13700K (or higher) on it when they come out.
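
For a rough sanity check on that claim, here's a quick Python back-of-the-envelope of first-word CAS latency (CL divided by the actual memory clock). It ignores subtimings and everything else, and the DDR5-7000 CL32 entry is just an assumed example kit, not a measurement:

# Quick sketch, not a benchmark: first-word latency in nanoseconds.
# latency_ns = CL / (data_rate / 2) * 1000, i.e. CAS cycles over the real memory clock.

def first_word_latency_ns(data_rate_mts, cas):
    mem_clock_mhz = data_rate_mts / 2   # DDR transfers twice per clock
    return cas / mem_clock_mhz * 1000   # cycles / MHz -> nanoseconds

kits = {
    "DDR4-4266 CL16 (tuned B-die)": (4266, 16),
    "DDR4-4000 CL14 (tuned B-die)": (4000, 14),
    "DDR5-7000 CL32 (assumed kit)": (7000, 32),
}

for name, (rate, cl) in kits.items():
    print(f"{name}: {first_word_latency_ns(rate, cl):.1f} ns")

That lands at roughly 7.5 ns, 7.0 ns and 9.1 ns respectively, which is why well-tuned B-die keeps pace with much faster-clocked DDR5 on latency; raw bandwidth is a different story.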
 
The next AMD motherboards will only use DDR5, and with Intel's 13th gen you will likely want it as well.

DDR5 is coming into its own: it's starting to outperform DDR4, prices are starting to drop, and according to Gamers Nexus they're expected to keep falling quickly as more new motherboards support it. I understand part of this is better support for the memory and part of it is increasing speeds.

DDR5 is supposed to come into its own when transferring large chunks of data. I don't know all the ins and outs of it, but as I understand it, the reason people talk about needing roughly double the DDR5 speed is access latency because of Gear 2: if you run DDR5 at about twice the maximum Gear 1 speed of DDR4, you get equivalent access speeds, and on top of that you can move gobs more data.

Someone can correct me if I'm wrong, but this sounds like the difference between a hard drive's seek time and its transfer rate: if you're moving very large blocks of sequential data, the transfer rate becomes more important than the seek time.
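
To put rough numbers on the Gear 1 / Gear 2 point, here's a small Python sketch (the specific data rates are illustrative values I picked, not measurements of any particular kit): in Gear 1 the memory controller runs at the memory clock, in Gear 2 at half of it, while peak bandwidth scales with the data rate.

# Back-of-the-envelope only: controller clock and peak bandwidth.
# mem clock = data rate / 2 (DDR), controller clock = mem clock / gear,
# peak bandwidth = data rate (MT/s) * 8 bytes per transfer * channels.

def mem_controller_mhz(data_rate_mts, gear):
    return (data_rate_mts / 2) / gear

def peak_bandwidth_gbs(data_rate_mts, channels=2):
    return data_rate_mts * 8 * channels / 1000

configs = [
    ("DDR4-3800, Gear 1", 3800, 1),
    ("DDR5-7600, Gear 2", 7600, 2),
]

for name, rate, gear in configs:
    print(f"{name}: controller ~{mem_controller_mhz(rate, gear):.0f} MHz, "
          f"peak ~{peak_bandwidth_gbs(rate):.1f} GB/s")

Both land at about a 1900 MHz controller clock, but the DDR5 config moves roughly twice the data (about 121.6 vs 60.8 GB/s across two channels), which matches the seek-time vs transfer-rate analogy above.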
 
The next AMD motherboards will only use DDR5, and with Intel's 13th gen you will likely want it as well.

DDR5 is coming into its own: it's starting to outperform DDR4, prices are starting to drop, and according to Gamers Nexus they're expected to keep falling quickly as more new motherboards support it. I understand part of this is better support for the memory and part of it is increasing speeds.

DDR5 is supposed to come into its own when transferring large chunks of data. I don't know all the ins and outs of it, but as I understand it, the reason people talk about needing roughly double the DDR5 speed is access latency because of Gear 2: if you run DDR5 at about twice the maximum Gear 1 speed of DDR4, you get equivalent access speeds, and on top of that you can move gobs more data.

Someone can correct me if I'm wrong, but this sounds like the difference between a hard drive's seek time and its transfer rate: if you're moving very large blocks of sequential data, the transfer rate becomes more important than the seek time.
It's basically throughput vs. latency. It's obviously more complex than that, but that's the super-basic version of it. I suggest checking out Buildzoid's YouTube channel, Actual Hardcore Overclocking, if you're interested in the nitty-gritty of it all; he has tons of extremely detailed videos about RAM, RAM timings, etc.

With regards to DDR4 vs. DDR5, I think Frame Chasers has the best content on the subject, as he does proper apples-to-apples comparisons with overclocked and subtiming-tuned RAM on both sides: a 12900K cherry-picked for a good DDR4 IMC vs. a 12900K cherry-picked for a good DDR5 IMC, running 2x 16 GB 4133 MHz CL15 DDR4 against 2x 16 GB 7000 MHz CL32 DDR5, both with tuned subtimings.

P.S. AMD's 5800X3D gets barely any extra gaming performance from top-performance RAM, as the huge cache takes care of it all. I'm guessing it'll be the same with AMD's upcoming 7000X3D chips.
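
If anyone wants a feel for why a huge cache makes RAM speed matter less, the textbook average-memory-access-time formula shows it. The numbers below (L3 hit latency, DRAM latencies, hit rates) are made-up placeholders, not measurements of any real chip:

# AMAT = hit_rate * cache_latency + (1 - hit_rate) * dram_latency
def amat_ns(hit_rate, l3_latency_ns, dram_latency_ns):
    return hit_rate * l3_latency_ns + (1 - hit_rate) * dram_latency_ns

l3 = 12.0  # assumed L3 hit latency in ns
for hit_rate in (0.80, 0.95):
    fast = amat_ns(hit_rate, l3, 70.0)   # "fast" DRAM, ~70 ns to memory
    slow = amat_ns(hit_rate, l3, 90.0)   # "slow" DRAM, ~90 ns to memory
    print(f"L3 hit rate {hit_rate:.0%}: {fast:.1f} ns vs {slow:.1f} ns")

At an 80% hit rate the DRAM difference still shows (23.6 vs 27.6 ns); at 95%, which a big stacked cache pushes games towards, it nearly vanishes (14.9 vs 15.9 ns).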
 
