This article states the following power consumption:

> The 10900K has a 125-watt TDP, for example, while AMD's Ryzen 9 3900X's is just 105-watts.

My understanding from other articles (like [1]) is, however, that Intel had to _massively_ increase the amount of power the CPU consumes when turboing under load:

> Not only that, despite the 125 W TDP listed on the box, Intel states that the turbo power recommendation is 250 W – the motherboard manufacturers we've spoken to have prepared for 320-350 W from their own testing, in order to maintain that top turbo for as long as possible.

Somehow it feels like we're back in the Pentium 4 days again.

[1]: https://www.anandtech.com/show/15758/intels-10th-gen-comet-lake-desktop
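The way both numbers can be "true" at once is Intel's two power limits: PL1 (the advertised TDP) caps sustained draw, PL2 caps short bursts, and a time constant tau governs how long the burst budget lasts. Here's a toy model of that behavior (my own sketch, not Intel's actual firmware algorithm; the 250 W and 56 s figures are the 10900K recommendations cited in the AnandTech piece):

    # Toy model: the chip may draw up to PL2 until a moving average of
    # power over roughly TAU seconds reaches PL1, then falls back to PL1.
    PL1 = 125.0   # sustained power limit, watts -- the advertised TDP
    PL2 = 250.0   # short-term turbo power limit, watts
    TAU = 56.0    # averaging time constant, seconds (AnandTech's figure)

    def simulate(requested_watts, dt=1.0, duration=120.0):
        """Per-second power draw for a constant requested load."""
        avg, draw = 0.0, []
        for _ in range(int(duration / dt)):
            p = min(requested_watts, PL2)
            if avg >= PL1:                 # turbo budget exhausted:
                p = min(p, PL1)            # fall back to the TDP
            avg += (p - avg) * (dt / TAU)  # exponential moving average
            draw.append(p)
        return draw

    trace = simulate(250.0)
    print(sum(1 for p in trace if p > PL1), "seconds at the full 250 W")

Under this (simplified) model the chip holds 250 W for half a minute or so before settling at 125 W, which is exactly why the "TDP" tells you so little about peak draw, and why board vendors design for far more.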
I'm surprised they _still_ don't have a working 10nm desktop chip. They're 5 years behind schedule (and still counting) at this point! This is just a reskinned 9900K with 2 more cores and a roughly 32% higher TDP (125 W vs. the 9900K's 95 W), and even that is a generously low estimate of how much power it will actually suck. I briefly had their "140 watt" 7820X chip (returned for a refund), and it would gladly suck down more than 200 watts under sustained load. Intel plays such games with their single-core turbo at this point that the 5.3 GHz figure means very little, and it's the same tired architecture they've been rehashing since Sandy Bridge (2011).

This is an incredibly poor showing, and if I were an investor I would be seriously questioning the future of Intel as a company.
As a desktop user I don't care about power consumption, and I care very little that it has x% more performance than last year's processors (at least when x < 100), because current processors have enough power for anything you throw at them. What I do really care about is this:

- Do they still pack that PoS Intel Management Engine inside the CPU? And are these new CPUs still vulnerable to Meltdown? Because if either of those questions is answered with "yes", then no amount of cores, GHz, and performance % is going to change my mind away from Ryzen.
It feels like we are moving towards ARM becoming the primary consumer CPU, with x86/x64 left powering niche use cases (dev, audio/video, gaming, etc.) and servers. Can anyone working in the space confirm/deny?
Too bad they still have only a dual-channel memory controller and the same 32/32 KB L1 caches. That means all that power is still wasted waiting for memory (max memory bandwidth of 45.8 GB/s, seriously?).
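For what it's worth, that 45.8 GB/s figure looks like straight arithmetic from the spec'd DDR4-2933 across two 64-bit channels (my reconstruction; I'm assuming the spec sheet divides by 1024 to get its "GB"):

    # Two 64-bit channels of DDR4-2933: transfers/s x bytes x channels
    bw_mb = 2933 * 8 * 2      # = 46,928 MB/s
    print(bw_mb / 1024)       # ~45.83 -- matches the quoted 45.8 GB/s

So the ceiling really is set by the two channels and the memory clock; no amount of core-side turbo changes it.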
Not sure why anyone is feeling so excited about these processors.
Does anybody need those 5.3 GHz at 300 watts? I believe the next big thing is "massively parallel distributed processing": thousands or maybe even millions of tiny processing units, each with its own memory and a reasonable instruction set, running in parallel and communicating with each other over some sort of high-bandwidth bus. It's like a datacenter on a chip. A bit like a GPU, but bigger. I think this will take ML and various number-crunching fields to the next level.
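To illustrate the programming model I mean, here's a toy sketch in Python: each "unit" owns its memory (its local variables) and only exchanges data over explicit channels, with queues standing in for the on-chip bus. Everything here is illustrative, not any real hardware API:

    from multiprocessing import Process, Queue

    def unit(unit_id, inbox, outbox):
        local_memory = []          # private to this unit, never shared
        for item in iter(inbox.get, None):    # None = shutdown signal
            local_memory.append(item * item)  # local number crunching
        outbox.put((unit_id, sum(local_memory)))

    if __name__ == "__main__":
        bus_in, bus_out = Queue(), Queue()
        units = [Process(target=unit, args=(i, bus_in, bus_out))
                 for i in range(8)]
        for u in units:
            u.start()
        for n in range(1000):      # scatter work across the units
            bus_in.put(n)
        for _ in units:
            bus_in.put(None)       # one shutdown signal per unit
        print(sorted(bus_out.get() for _ in units))
        for u in units:
            u.join()

Now imagine that with a million units and the "bus" in silicon instead of the OS; the point is that no unit ever touches another's memory, so there's nothing like a shared cache hierarchy to bottleneck on.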
This may be just good enough to hold off AMD from further gaining market share on the desktop. For the group of users who don't need a discrete GPU, the included iGPU is a good enough solution.

One thing that bothers me a lot is the 2.5Gbps Ethernet that is supposed to come with these new Intel motherboards (assuming motherboard vendors use it). Why not 5Gbps Ethernet? How much more expensive is it? It seems we still don't have a clear path for moving forward from 1Gbps Ethernet. Personally I would have liked 10Gbps, but the price is still insanely expensive.
So how is this possible now and not 5 years ago? Are there new discoveries that let us bring up the clock speed again?

(Will they go even higher in the future?)
Is this 5.3GHz with, or _without_, the mitigations for Spectre, RIDL, Meltdown, ZombieLoad, Fallout, MDS, TAA, and whatever other vulnerabilities are inherent in Intel chips?
This is still inferior to AMD's high-end offerings in terms of price/performance and performance/watt, but it does have one edge: single-threaded performance. There are certain applications where that matters a lot. Other than those, I'd pass.
I'm kind of surprised at this point that Intel didn't just bite the bullet and go with TSMC or Samsung for fabrication for a year or two until they figured out their in-house sub-14nm story.
What's the betting that when Apple moves to ARM processors, AMD and Intel both start developing their own ARM CPUs? I have a feeling it's going to be extremely hard for x86 to compete once Apple shows the way on this.
So about 400,000X faster than my first computer, the 8-bit 1 MHz Atari 2600 console, which cost $200 at the time, 40 years ago. Or about 600X faster than the 64-bit 80 MHz Cray-1, which cost $8 million at the same time.
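Rough sanity check on those ratios (my arithmetic; the per-clock factors are hand-wavy and real cross-era comparisons are much fuzzier than this):

    # 10 cores at 5.3 GHz vs. single-core machines, counting clocks only
    i10900k = 5.3e9 * 10     # cycles/s across all cores
    print(i10900k / 1.19e6)  # vs. the 2600's 6507 (~1.19 MHz actual):
                             # ~44,500x on clocks; factor in the far
                             # higher work-per-clock of a modern core
                             # and ~400,000x is the right ballpark
    print(i10900k / 80e6)    # vs. Cray-1: ~660x, close to the 600x claim

So the claims hold up as order-of-magnitude estimates, with the caveat that the Cray-1's vector units make a pure clock comparison especially unfair.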