Everybody seems to view this as AMD mimicking Intel's acquisition of Altera. (That acquisition has not borne visible fruit.)
My contrarian speculation is that this is a move driven by Xilinx vs. Nvidia, given Nvidia's purchase of Arm and Xilinx's push into AI/ML. Xilinx is threatened by Nvidia's move given its dependence on Arm processors in its SoC chips and their ongoing fight in the AI/ML (including autonomous vehicles) product space. My speculation is that this gives Xilinx alternative high-performance AMD64 (and possibly lower-performance, lower-power x86) "hard cores" to displace the Arm cores.
Interesting times.
What happened recently in AMD's market?
AWS's ARM-based processors look to be widely deployed in the cloud. Nvidia, the leader in GPU compute, is buying ARM. Intel, which has suffered deeply from its 10nm fab problems, is going to work with TSMC. And AMD's P/E ratio is at 159, higher than Amazon's!
So maybe AMD is looking to convert some inflated stock into a predictable business.
And it's better to invest in a predictable business that may have possible synergies with yours; otherwise it looks bad to the stock market.
And Xilinx is probably the biggest company AMD can buy.
<many years ago> when Intel acquired Altera and announced Xeon CPUs with on-chip FPGAs, I was optimistic that they would eventually add FPGAs to more low-end desktop CPUs (or at least Xeons in the sub-$1000 zone). But it never materialized. I'm slightly optimistic this time around too, but I suspect that the fact that Intel didn't do it hints at some fundamental difficulty.
It is really funny to find out that Intel uses Xilinx FPGAs for prototyping because they cannot get what they acquired (Altera) to work in house.
I would rather see processing in memory (PIM) become mainstream than FPGAs. FPGAs are basically an assembly line that you can change overnight: excellent at one task, and they minimize end-to-end latency, but if it's about raw performance you are entirely dependent on the DSP slices.
With PIM your compute resources grow with the size of your memory. All you have to do is partition your data and then write regular C code, with the only difference being that it is executed by a processor inside your RAM (a minimal sketch follows below).
Having more cores is basically the same thing as having more DSP slices. Since those cores are embedded directly inside memory they have high data locality, which is basically the only other benefit FPGAs have over CPUs (assuming the same number of DSP slices and cores). And it's obviously easier to program than either GPUs or FPGAs.
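To make the partition-then-run-plain-C idea concrete, here is a minimal sketch in C. It assumes a hypothetical vendor offload API (hinted at as `pim_launch` in a comment); nothing here is a real PIM SDK, and the host loop merely stands in for kernels that would run on per-bank processors.

```c
/*
 * Minimal sketch of the "partition, then run plain C near the data" idea.
 * pim_launch() mentioned below is a hypothetical placeholder for a
 * vendor-specific PIM offload call, not a real library function.
 */
#include <stddef.h>
#include <stdint.h>

/* Kernel written as ordinary C; on PIM hardware it would run on the
 * small processor attached to each memory bank, next to its slice. */
static int64_t partial_sum(const int32_t *slice, size_t n)
{
    int64_t acc = 0;
    for (size_t i = 0; i < n; i++)
        acc += slice[i];
    return acc;
}

int64_t pim_style_sum(const int32_t *data, size_t n, size_t ranks)
{
    if (ranks == 0)
        ranks = 1;

    int64_t total = 0;
    size_t chunk = (n + ranks - 1) / ranks;

    /* Partition the data so each memory-side processor only touches
     * its own slice; this loop stands in for launching one kernel per
     * rank, e.g. pim_launch(rank, partial_sum, ...), then reducing. */
    for (size_t r = 0; r < ranks; r++) {
        size_t begin = r * chunk;
        if (begin >= n)
            break;
        size_t len = (begin + chunk > n) ? n - begin : chunk;
        total += partial_sum(data + begin, len);
    }
    return total;
}
```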
I hope they will not drop their CPLD chips. They were made obsolete at least once, but Xilinx fortunately decided to extend support for a couple more years. CPLDs are very useful for repairing vintage gear where logic components fail and are no longer available (for example, custom-programmed PALs): you can describe the logic in Verilog and often solder the CPLD in place of multiple chips.
If they drop them, the only alternative would be a full-blown FPGA, which is a bit wasteful.
The Xilinx Zynq and Zynq UltraScale+ series combine GHz-class ARM cores with FPGA fabric. They're incredibly useful for small-volume niche use cases and, to give an example from my industry, are becoming popular in space applications. The reason is that hardware qualification/verification is extremely expensive, but a change to the FPGA fabric is not.
My point is that Xilinx has already proven ARM CPU + FPGA on one die, and I think AMD CPU + FPGA is very likely to be a success.
Between this, ARM adoption, Apple Silicon and similar offerings (which kind of skipped ARM+FPGA for ARM+ASIC), and RISC-V, it's like 1992 again with exciting architectures. Only this time software abstraction is much better, so there is not a huge pressure to converge on only one or two architectures.
Could be interesting. I'd prefer an independent Xilinx, but maybe competition with Intel will stimulate the reconfigurable-computing revolution that fizzled out.
I understand that they need a big push in the DPU market, but I do not understand why companies as big as AMD don't invest and build what they need in house. If anyone can gather the talent, it is AMD. Everyone was talking about future data centers, and as far as I can tell I have been hearing about heterogeneous IO since 2009 (and that's just me; I was hearing about it while working on Xen).
To answer my own question: maybe the market is so volatile that they cannot do strategic planning like that?
Hmm, this was rumored, but I guess now it is actually happening. Nice bump in the share price there, I guess; it's currently trading at around $115 and it seems it will be converted into roughly $143 worth of AMD stock. I assume this is to help AMD push further into the server and ML compute spaces?
Probably makes sense as a business decision.
In my opinion, I would also like AMD to invest in ML tooling while they have the cash.
I hope one day PyTorch, XLA, and Glow will have native AMDGPU support, and I will be able to buy a couple of Radeon 6000 series cards, undervolt them, and build a good ML box.
I think AMD GPUs on TSMC 7nm, then maybe even 5nm, will have the best performance per watt, even if they might be 10% or 20% slower than the alternative. For me, performance per watt and per dollar is more important.
Anyway, it's sad that they couldn't put together a 5-to-10-person engineering team (I might be too optimistic) to make their product relevant in this market.
I'd like to see consumer-level CPU + GPU + FPGA products that emulators could take advantage of. I'm thinking of PS2 floating-point math right now, but I'm sure there are other examples where an FPGA could be beneficial.
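To make the PS2 example concrete: the PS2's FPU/VU units are not IEEE 754 compliant (no NaN or infinity, denormals flushed to zero, overflow clamps to the largest finite value), so emulators pay for per-operation fix-ups on the host FPU. The sketch below is a simplified, hypothetical illustration of that cost, not actual emulator code; an FPGA could implement the console's exact semantics directly instead.

```c
/*
 * Rough sketch of why PS2 float emulation is costly on a host FPU.
 * Details are simplified; this is an illustrative sketch, not PCSX2 code.
 */
#include <math.h>
#include <float.h>

static float ps2_fixup(float x)
{
    if (isnan(x) || isinf(x))               /* no NaN/Inf on the PS2      */
        return copysignf(FLT_MAX, x);        /* clamp to +/- largest float */
    if (x != 0.0f && fabsf(x) < FLT_MIN)    /* denormals are flushed      */
        return copysignf(0.0f, x);
    return x;
}

/* Emulated PS2-style multiply: do the host multiply, then patch the
 * inputs and result so out-of-range values behave like the console. */
static float ps2_mul(float a, float b)
{
    return ps2_fixup(ps2_fixup(a) * ps2_fixup(b));
}
```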
"Are FPGAs any good?" seems to be the question under discussion. Well, they must be good for something, because "... Across the different market segments where it operates, Xilinx brought in revenues of $767 million last quarter."
https://siliconangle.com/2020/10/27/official-amd-snap-xilinx-35b/
The apparent driver here isn't that AMD wants to get into the FPGA business. The real motivation appears to be a combination of platforms and programmable chiplets. There are two problems that programmable chips address: https://semiengineering.com/amd-wants-an-fpga-company-too/
For those in the ASIC and chip design industry, two of the largest chip companies (Intel and AMD) buying two of the largest FPGA companies was inevitable; it was a matter of "when" rather than "if".
I think the more interesting question is what they are going to do proactively with these mergers rather than just sitting on them.
I really hope their respective CEOs will take a page from the open source Linux/Android and GCC/LLVM revolutions. I'd say the chip makers, not the end users, are the ones that benefit most from these open source movements. To understand this situation we need to understand the economics of complementary goods [1].
In the case of chip makers, if the cost of designing, researching, and maintaining OSes like Linux/Android and the compiler infrastructure is minimized (i.e. close to zero), they can sell their processors at a premium price with handsome profits. If, on the other hand, the OSes and compilers are expensive, their profit will be inversely proportional to the prices of those complementary elements.
Unfortunately, as of now, the design tools and CAD software for hardware design and programming, as well as parallel-processing design tools, are prohibitively expensive, disjointed, and cumbersome (hence expensive manpower); if you're in the industry you know that is not an exaggeration.
Having said that, I think it's best for Intel/AMD and the chip design industry to fund and promote robust free and open source development tools for their ASIC design, including CPU/GPU/TPU/FPGA combo designs.
IMHO, ETH Zurich's LLHD [2] and Chris Lattner's LLVM effort on MLIR [3] are moving in the right direction for pushing the envelope and consolidating these tools (i.e. one design tool to rule them all). If any Intel or AMD folks are reading this, you need to knock on your CEO/CTO's door and convince them to make these complementary goods (design and programming tools) as good and as cheap as possible, or better, free.
[1] https://www.jstor.org/stable/2352194?seq=1
[2] https://iis.ee.ethz.ch/research/research-groups/Digital%20Circuits%20and%20Systems/current-projects/epi.html
[3] https://llvm.org/devmtg/2019-04/slides/Keynote-ShpeismanLattner-MLIR.pdf
This comment is off topic, but while listening to the earnings call I don't hear anything specific about official PyTorch and TensorFlow support for AMD graphics cards. All the questions and answers are generic, full of buzzwords like "AI" and "doubling down on our software support," and it doesn't give me confidence to swap my NVIDIA GPU for an AMD one in the foreseeable future.
I remember the time Elon Musk told an analyst he was asking boring questions to fill in his spreadsheet, and I'm feeling the same way while listening to this earnings call.