I'm excited for what this will do to the cost of dedicated servers in ~1 year.<p>Also, as a person who used to work at Intel, I don't know whose idea this was, but that person should probably have a long hard look at themselves -- hardware people are exactly the crowd this kind of shit wouldn't fly with, because they'll almost always ask for details and can spot a hack from a mile away.<p>On the one hand I can sympathize with Intel -- seeing how tough it was to stay competitive in the market year over year, trying to predict and start developing the next trend in hardware. But on the other hand... Why in the world would you do this -- Intel basically dominates the high-end market right now; just take your time and make a properly better thing.
I get that Intel feels threatened by AMD. They are trying to impress the consumers... but bullshitting a demo is a very bad move! When a consumer decides to build a new PC, the characteristics of the product matter, but so does the reputation of the company that manufactures it. Right now Intel is putting too much effort into sketchy marketing practices: it undermines the actual work being done on their processors by some very talented people.<p>Presenting it as an extreme overclocking demo would have been a much wiser option.
I just recently replaced the old i7 920 in my home server with an AMD Ryzen 5 2600. Really like it so far. Price/performance is great. This is my first AMD in probably forever...<p>There are two things I don't like. One is that their CPUs are pin-based, which seemed kind of old-fashioned after Intel CPUs -- but this is really a minor thing. The other is that memory compatibility is a bit finicky. Maybe it has to do with the CPU being so new. Not sure.
As an outsider to 'enterprise-grade' computing, I'm curious about situations where a high number of cores in a single processor would be superior to multiple processors with the same total energy draw sitting on a single motherboard?<p>I can understand HPC applications where the high-speed interconnect on the chip would make a big difference.<p>But in business applications where the cores are dedicated to running independent VMs, or are handling independent client requests, what is really gained? There would still be some benefits from a shared cache, but how large quantitatively would that be?
Which one of these companies does a better job with free/libre software? I've always had a soft spot for AMD because it's the underdog, but I want to make sure they are free, too.
AMD did a great job with Threadripper, making high end CPUs much more affordable. It's interesting that Intel doesn't lower their prices. What's the logic behind it?
For a long time I saved a copy of a publication by Motorola about how Intel played fast and loose with benchmarks in comparisons of the 80386 with the 68020. (I lost it in a move, alas.) Can't say I was surprised to read about the 28-core fiasco.
There was an interview with an Intel engineer on this, it was quite revealing: <a href="https://www.youtube.com/watch?v=ozcEel1rNKM" rel="nofollow">https://www.youtube.com/watch?v=ozcEel1rNKM</a>
This is a short-term loss for Intel, but it could end up being a long-term win as an attack on AMD. Making this announcement forced AMD to advance its plans for the 32-core chip, possibly sooner than it really wanted to. That depletes AMD's product pipeline faster, making it more difficult to keep pace with future advances.<p>Edit: initial reports said that AMD was only planning to announce the 24-core CPU, and may have advanced the announcement of the 32-core chip due to Intel's stunt. TFA doesn't mention that, so possibly the initial reports were not accurate.
I think of AMD's current approach - a microarchitecture with slower cores, but more of them, than Intel - as very similar to what Sun/Oracle tried to do from 2005 to 2010 with the Niagara family (UltraSPARC T1-T3).<p>Each core in those chips was seriously underclocked compared to a Xeon of similar vintage and price point (1-1.67 GHz, versus 1.6 GHz to 3 GHz or more), and lacked features like out-of-order execution and big caches that are almost minimum requirements for a modern server CPU. Sun hoped to make up for the slow cores in server applications by having more cores and multiple threads per core (though with a simpler technology than SMT/hyper-threading).<p>However, Oracle eventually decided to focus on single-threaded performance in its more recent chips - it turns out that no OoO execution and sub-2 GHz nominal speeds look pretty bad for many server applications. My suspicion is that even though the CPU-bound parts of games are becoming more multi-threaded, AMD will be forced to fix its slower architecture or lose out to Intel again in the server AND high-end desktop markets in a few years.