I worked on a project where the (large) customer had some legacy requirements about percentage of "CPU" our application was allowed to use. The requirement was written back in the days when a single computer really only had one core, and once things like that are written it's hard to get them unwritten.<p>For our application (heavily numeric, very well behaved cache access), turning on hyperthreading only increased real performance by about 10% (measured as work completed per unit of time). However, we settled on a metric where we defined CPU use to be load average divided by number of cores. Doubling the number of cores the system showed in top allowed us to meet the required margin.<p>So from a bureaucratic point of view, hyperthreading was a 100% improvement.
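For what it's worth, that metric is trivial to reproduce; here is a minimal sketch (assuming Linux and Python's standard library, numbers purely illustrative):
<pre><code>
# Rough sketch of the metric described above: "CPU use" defined as load
# average divided by the number of logical CPUs. Assumes Linux; numbers
# are illustrative only.
import os

one_min_load, _, _ = os.getloadavg()
logical_cpus = os.cpu_count()          # counts SMT siblings as CPUs

cpu_use = one_min_load / logical_cpus
print(f"load={one_min_load:.2f} over {logical_cpus} logical CPUs "
      f"-> 'CPU use' = {cpu_use:.0%}")
</code></pre>
With SMT on, os.cpu_count() doubles while the load average produced by the same fixed workload stays roughly the same, so the reported "CPU use" halves, which is exactly the bureaucratic win described above.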
Flagging this as it's an absurdly shallow article, apparently about 10 minutes of "research" after hearing something on Twitter, and it conflates typical end-user use cases with an entire technology. The "tuning" and "oh noes my VMs, this is surely a new problem nobody doing virtualization has ever thought of" section is too absurd to even bother with. But on the security aspect it's worth pointing out that in many, if not <i>most</i> [1], truly performance-critical environments all code being run is trusted. The system or cluster is dedicated to being handed one specific job after another to crunch on, exclusively by authorized users in authenticated ways, and it outputs exclusively to a controlled channel going off-system. Even if it ever did have a problem, the result would merely be some possible corruption of data in flight and some downtime while the whole thing was re-imaged, but nothing remotely worse than a 15-50% drop in performance (!). For root's sake.<p>----<p>1: where "most" means "in the raw amount of hardware $$$ spent".
Of course SMT makes sense. Why would it not? The article says that's because people only count the threads in their "cpuinfo" output and get the wrong impression? The Intel vulnerabilities are not SMT vulnerabilities per se; they are side-channel attacks on a specific SMT implementation.
Given the mismatch between memory latency and how fast a CPU can actually run when it does have data, SMT still does make sense, sometimes, for some kinds of systems. Bigger, better caches make it less useful, and security... well. "Ownership" of one's computational environment is a metaphysical debate now; this is just one more bullet point on the list.
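A hedged, Linux-only sketch of that latency-hiding effect, assuming os.sched_setaffinity and the sysfs topology files are available and that interpreter overhead doesn't completely swamp the cache misses; it times a dependent pointer chase on one logical CPU versus both SMT siblings of the same core. Treat the output as a ballpark, not a benchmark:
<pre><code>
# Rough sketch: memory-latency-bound pointer chase on one logical CPU
# vs. both SMT siblings of the same physical core. Linux-only; sizes and
# iteration counts are arbitrary.
import os
import random
import time
from multiprocessing import Process

N = 1 << 22           # ~4M entries, well beyond a typical per-thread cache share
STEPS = 2_000_000     # dependent loads per worker


def parse_cpu_list(text):
    """Parse sysfs CPU lists like '0,8' or '0-1' into a list of ints."""
    cpus = []
    for part in text.strip().split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cpus.extend(range(int(lo), int(hi) + 1))
        else:
            cpus.append(int(part))
    return cpus


def make_cycle(n):
    """Random permutation cycle: every load depends on the previous one."""
    perm = list(range(n))
    random.shuffle(perm)
    nxt = [0] * n
    for i in range(n - 1):
        nxt[perm[i]] = perm[i + 1]
    nxt[perm[-1]] = perm[0]
    return nxt


def chase(cpu, nxt):
    os.sched_setaffinity(0, {cpu})    # pin this worker to one logical CPU
    i = 0
    for _ in range(STEPS):
        i = nxt[i]                    # serial, cache-missing dependent loads


def run(cpus, nxt):
    t0 = time.time()
    workers = [Process(target=chase, args=(c, nxt)) for c in cpus]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    rate = len(cpus) * STEPS / (time.time() - t0) / 1e6
    print(f"cpus {cpus}: {rate:.2f} M dependent loads/s")


if __name__ == "__main__":
    with open("/sys/devices/system/cpu/cpu0/topology/thread_siblings_list") as f:
        siblings = parse_cpu_list(f.read())
    print("SMT siblings of cpu0:", siblings)
    nxt = make_cycle(N)
    run(siblings[:1], nxt)    # one thread on the core
    run(siblings[:2], nxt)    # both hyperthreads (needs SMT enabled)
</code></pre>
When the workload is mostly waiting on memory, the two-sibling run tends to finish more chases per second than the single thread, which is the whole argument for SMT in the first place.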
In the linked article about gkh's talk, you find this tidbit: "If you're not using a supported distro, or a stable long-term kernel, you have an insecure system. It's that simple. All those embedded devices out there, that are not updated, totally easy to break."<p>Is he still talking about SMT, or just poor security of Linux in general?<p>I'm wondering about this since "all those embedded devices out there" that I can think of are not running CPUs with SMT.
In my mind, SMT made more sense when core counts were low. These days, desktop use cases more often run out of threads to run than out of places to run them. Server use cases can often run more threads, but it might not be useful to run 32 CPU threads if your NICs can only properly run 16 queues.
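A quick way to sanity-check that on Linux is to count the NIC's queues in sysfs against the logical CPU count; a rough sketch (the interface name "eth0" is just a placeholder):
<pre><code>
# Rough sketch: compare a NIC's RX queue count with the logical CPU count.
# Linux-only; "eth0" is a placeholder interface name, adjust as needed.
import os

IFACE = "eth0"
queues_dir = f"/sys/class/net/{IFACE}/queues"

rx_queues = [d for d in os.listdir(queues_dir) if d.startswith("rx-")]
logical_cpus = os.cpu_count()   # includes SMT siblings

print(f"{IFACE}: {len(rx_queues)} RX queues, {logical_cpus} logical CPUs")
if logical_cpus > len(rx_queues):
    print("More logical CPUs than RX queues; the extra SMT threads "
          "won't get their own NIC queue for this interface.")
</code></pre>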
Some benchmarks for orientation: <a href="https://www.anandtech.com/show/11544/intel-skylake-ep-vs-amd-epyc-7000-cpu-battle-of-the-decade/15" rel="nofollow">https://www.anandtech.com/show/11544/intel-skylake-ep-vs-amd...</a>
This makes me wonder how SMT is handled in the Linux kernel, especially around cpu-idle and scheduling. I found the articles below; sharing them for anyone else who is interested:<p>1- Rock and a hard place: How hard it is to be a CPU idle-time governor <a href="https://lwn.net/Articles/793372/" rel="nofollow">https://lwn.net/Articles/793372/</a><p>2- Many uses for Core scheduling <a href="https://lwn.net/Articles/799454/" rel="nofollow">https://lwn.net/Articles/799454/</a>
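For anyone who wants to poke at it, the kernel exposes the global SMT state and the sibling topology that this idle/scheduling work operates on via sysfs; a small sketch, assuming a reasonably recent Linux kernel where these paths exist:
<pre><code>
# Small Linux-only sketch: read the kernel's view of SMT state and which
# logical CPUs are siblings on the same physical core.
import glob


def read(path):
    with open(path) as f:
        return f.read().strip()


# Global SMT knobs (also used by the L1TF/MDS mitigations).
print("smt active :", read("/sys/devices/system/cpu/smt/active"))
print("smt control:", read("/sys/devices/system/cpu/smt/control"))

# Per-CPU sibling lists, e.g. cpu0 and cpu8 sharing one core.
seen = set()
for path in sorted(glob.glob(
        "/sys/devices/system/cpu/cpu[0-9]*/topology/thread_siblings_list")):
    siblings = read(path)
    if siblings not in seen:
        seen.add(siblings)
        print("core siblings:", siblings)
</code></pre>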
I would say that one of the major performance boosts of Zen over Bulldozer is the introduction of real SMT, due to the expiration of the patents. Bulldozer had CMT, which is not the same technique.<p>CMT vs SMT (very simplified view): <a href="https://i.imgur.com/AcZnipK.png" rel="nofollow">https://i.imgur.com/AcZnipK.png</a><p>As you can see, with CMT you have the same number of ALUs as with SMT, but a single thread can only use its dedicated ALUs, leaving the others idle, whereas SMT allows a single thread to use all of the ALUs.
It's certainly good for Amazon, where they pawn off a thread as a "vCPU".<p>If SMT died off, it would be a pretty big margin hit for them.
How will SMT evolve with the frequency down-clocking required by AVX-512? Might a thread be penalized because it happens to be executing concurrently with an AVX-512 thread on the same core?
FYSA, SMT in this context is <i>simultaneous multithreading</i> a.k.a. <i>hyperthreading</i>, not <i>surface mount technology</i>.<p>Hardware folks can safely move on.
I wish that acronyms would be written out if they have multiple meanings in the computer context. My first thought was "how can satisfiability modulo theories ever not make sense?"
Yes, it does. Instruction-level vulnerabilities arise from executing untrusted code.<p>If you have to do that, your security is already compromised. Shared hosting, virtualisation, etc. are all insecure by definition.
Intel i5 desktop chips don’t have hyper-threading (SMT) and haven’t for the 10 years they’ve been available. Typically the i7 variant of the same CPU has been about £100 more (roughly 50%). The point about only 5% extra die space makes no difference to the consumer, as there is/was quite a high cost premium on desktops for that feature. Now Intel has removed hyper-threading from most of its i7 desktop chips, and you get 2 extra cores over the i5 version instead.