I have been looking forward to this happening for a long time. A quick search of my past comments shows I first brought it up in 2016. I remember checking Xilinx's market cap last year: it was ~$30B+, while AMD's was only ~$50B, and I figured the chance of this *ever* happening was very slim. (I *was* an AMD shareholder, but I never thought it would become a $100B market cap company; I sold it all last year.) Now AMD is at $100B+ while Xilinx is at sub-$30B. The market is crazy.

But things have changed since 2016-2018. I used to see the move to FPGAs as an offensive play in the server space; now both AMD and Intel are doing whatever they can to fence off ARM.
Going back to 2006, Xilinx was making FPGAs that would sit in an Opteron socket and speak HyperTransport to the other CPU(s).

I never heard much more about that, but I can imagine it might've been popular with certain specialized workloads, in an era before GPU compute took off, or perhaps workloads that don't fit GPUs well.

I wonder if those specialized applications might be a small but important market.
As a value-multiplying investment, AMD should buy a company whose programmers can build a half-decent CUDA-compatible layer for its graphics cards.
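For reference, AMD's existing attempt at this is HIP (part of ROCm), whose runtime API deliberately mirrors CUDA's almost one-to-one; whether it counts as "half-decent" is exactly the complaint. A minimal vector-add sketch, assuming hipcc and a working ROCm install, showing how close the source level already is:

    #include <hip/hip_runtime.h>  // s/cuda/hip/ covers most of a port

    __global__ void vecAdd(const float* a, const float* b, float* c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // same builtins as CUDA
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;
        float *a, *b, *c;
        hipMalloc(&a, n * sizeof(float));  // hipMalloc mirrors cudaMalloc
        hipMalloc(&b, n * sizeof(float));
        hipMalloc(&c, n * sizeof(float));
        vecAdd<<<(n + 255) / 256, 256>>>(a, b, c, n);  // same launch syntax
        hipDeviceSynchronize();
        hipFree(a); hipFree(b); hipFree(c);
        return 0;
    }

The hard part was never this syntax layer; it's matching CUDA's library ecosystem, driver stability, and tooling.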
Five years ago AMD was a $2 stock and Xilinx was a $50 stock. Now AMD is an $80 stock and Xilinx is a ~$100 stock. Xilinx's market cap used to be 5x AMD's. Even today Xilinx generates more free cash than AMD. It's kind of crazy how fast company values can change.
Feels like a datacenter play for specialized compute. Given AMD's dominance of server workloads in $/performance, this seems like doubling down on that advantage. FPGAs are already used today to offload SSL termination from the CPU.
The best outcome for me would be more open FPGA tools, maybe even a documented bitstream format.

This could also just be a strategic acquisition for internal purposes: I bet a lot of design and architecture validation is done on big FPGAs. Maybe they wanted some custom ones?
I'm unsure whether this type of merger is good for consumers. Will AMD and Xilinx make better products together, or does competition drive them more effectively? I tend to believe the latter.
Intel does not seem to have capitalized on the Altera acquisition, but maybe that is also related to Intel's fab issues.

From an operations perspective, this combines two non-overlapping TSMC customers that can potentially negotiate better wafer prices together. I assume that AMD has significantly higher wafer counts than Xilinx, so this would primarily benefit the Xilinx business.

From a technology perspective, this acquisition may succeed where the Intel/Altera acquisition fizzled, thanks to AMD's chiplet approach. Swapping a processor core for an FPGA chiplet might be an "easy" win in some markets.
I think this is a terrible idea for the future of technology. AMD would not be existentially dependent on the FPGA market, and the stack is so important to cutting-edge tech that it demands focused stewardship. It's clear something needs to change in FPGA land. Xilinx and Intel both produce abhorrent FPGA software, causing project delays and cancellations of cutting-edge technology dependent on their devices. Xilinx probably needs an activist investor to force its software past the era when Java was something you put on your resume. Maybe AMD can be that, but I doubt it, and an acquisition by AMD would likely result in the same abandonment we see from Intel towards Altera.

Xilinx had an opportunity to be in the position NVidia is in today, and it was not obvious in 2005 who was going to win the high-performance computing (HPC) market, because of the inherent advantages that FPGAs had, and still have today, for high-speed I/O, RAM throughput, and hard timing requirements. NVidia's CUDA, on the other hand, produces results that are generally painless to port, with easy performance improvements on new devices. They have essentially won HPC application development.

Fundamentally, the FPGA companies need to adopt an open-source, cloud-first ethos towards their stacks, and especially focus on making low-level application development agile. Making a counter control some LEDs should be a 10 second process. Changing the clock speed should be a 1 second process. Adding a button to reset the counter should be a 1 second process. None of this is true. Good luck getting this to work within one day as a newbie on a vanilla machine. Good luck even installing Vivado and compiling any bitstream in one day (thank God they have AWS images; good luck getting that going within 2 hours). Good luck coordinating with source control and generally merging work with a team.

There was some hope when Intel bought Altera that the compiler engineering expertise might help improve virtual CPU stacks, or at least that some investment in the usability of their software tools might have been seen as a competitive advantage, but all the FPGA vendors have seriously dropped the ball on improving the ecosystem with usable software. FPGA software is so bad that you have to justify a 6 month development cycle for something that should probably take a few days if things were even remotely sane. It's hard to enumerate all the examples of brokenness, but small changes end up taking a long time to rebuild: things like inverting your reset in a module and then waiting two hours for the trivial change to resynthesize and place-and-route. (Edit: two hours later, the trivial reset-polarity inversion failed timing, because it also required me to add synchronization stages to send the inverted reset signal across clock domains. Shouldn't be doing this at night.)
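For scale, here is roughly all the design content in that "counter controls some LEDs" example, written as a Vitis HLS C++ top function (a hedged sketch; a Verilog blinky is just as short, and either way the hours go to the toolchain, not the code):

    #include <ap_int.h>  // Xilinx arbitrary-precision integer types

    // Free-running LED blinker: a static variable synthesizes to a register.
    void blinky(ap_uint<8>& leds) {
    #pragma HLS INTERFACE ap_none port=leds        // plain wires out to the LEDs
    #pragma HLS INTERFACE ap_ctrl_none port=return // no start/done handshake, free-run
        static ap_uint<28> counter = 0;
        counter++;
        leds = counter.range(27, 20);  // top bits toggle at a human-visible rate
    }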
I would quite like an FPGA on board my CPU. Intel have what used to be called Altera, but they don't seem to have done much publicly with it in relation to their CPU business.
Aside from the technology, this merger probably means a bunch more downward pressure on jobs and salaries for silicon hardware people, doesn't it? Consolidation in the chip industry has generally produced an oversupply of redundant roles after mergers, such that wages in silicon jobs are much lower than in software, right? This only adds more.
At least Xilinx will be able to fall back on AMD's amd64/x86 architecture if NVIDIA limits ARM core licensing. The new SoC products like the Zynq UltraScale+ with ARM cores are really great.
I'm really very bearish on this idea. When Intel bought Altera there were really 3 issues. The first was that they drove growth by bundling their FPGA stuff with their fabrication services, so someone like Ericsson could do half their design on FPGA and then slowly move to ASIC. That was good for Intel because it won them business, but it didn't really do anything for FPGAs.

The second thing was bundling FPGAs into the same package as a CPU. There were two problems with this. Firstly, it's a load of dark silicon when you aren't using the FPGA part, and you have to make a load of tricky decisions like "am I going to design my thermals to let me run all the Xeon cores at max speed whilst running my FPGA?" The other problem is that it's really bloody hard figuring out the programming model, where the FPGA wants a deterministic data flow and you've got these CPU nutters throwing memory at you out of order, stalling, and screwing you up with really bizarre cache behaviour. Since the CPU guys designed the interface, it leaves the FPGA developer an almost impossible task to build efficient processing pipelines.

Finally we've got the moonshot: the idea that you could come up with a high-level design language to program FPGAs like software. Intel invested very heavily in this when they bought Altera, but I'm still not seeing any forward progress, and let's be clear, Intel poured *huge* resources into making it happen. The last I heard they had over 100 engineers on that project, and the sum total of their achievement is a handful of "partners" who wrote OpenCL/HLS/whatever and then worked with the engineers through a grueling process of rewriting their code over and over and over until it looked like the RTL they already wanted. At one point Intel was going to rewrite their entire FPGA video IP suite in HLS; I don't know how that went, but the acquisition of Omnitek probably wasn't a good sign. It's been 4 years since the acquisition, and they were working on it long before then. The project is still basically just a load of marketing guff on their website that no one can actually use. I would say that with the inevitable restructuring Intel will have to do after their various other fuck-ups, this project is on thin ice.

The problem is that AMD is very likely to fall into the traps of points 2 and 3, and we're going to lose the last big independent FPGA company so that AMD can kill themselves trying to compete with Nvidia. And the real danger is that whilst all that's happening, actual innovation will disappear in the FPGA space. Xilinx's ACAPs are actually interesting (one way of solving the programming model), but Intel's last piece of innovation in the FPGA space died with the failure of HyperFlex.
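To make the "rewrite until it looks like the RTL" loop concrete, a hedged sketch in Xilinx-style HLS C++ (Intel's HLS has the same flavour): the version you write first and the version the tool will actually pipeline are different programs, and the second one only falls out once you already know the hardware you wanted:

    // Take one: what a software engineer writes.
    float dot_naive(const float a[1024], const float b[1024]) {
        float acc = 0;
        for (int i = 0; i < 1024; i++)
            acc += a[i] * b[i];  // single accumulator: the multi-cycle FP adder
                                 // creates a loop-carried dependency, so this
                                 // loop will not pipeline at II=1
        return acc;
    }

    // Take N: eight interleaved accumulators to hide the adder latency, plus
    // a pragma spelling out the register banking you already decided on.
    float dot_hls(const float a[1024], const float b[1024]) {
        float partial[8];
    #pragma HLS ARRAY_PARTITION variable=partial complete
        for (int j = 0; j < 8; j++) partial[j] = 0;
        for (int i = 0; i < 1024; i++) {
    #pragma HLS PIPELINE II=1
            partial[i % 8] += a[i] * b[i];  // each accumulator is updated every
                                            // 8th iteration, breaking the recurrence
        }
        float acc = 0;
        for (int j = 0; j < 8; j++) acc += partial[j];
        return acc;
    }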
It's quite an interesting world where AMD has one of the 8 cores on a chiplet be a Xilinx FPGA. The only issue is that FPGAs are not easy enough to program, but maybe AMD can get out ahead of Nvidia on this one and define the software stack here the way Nvidia did with CUDA.
So when Intel bought Altera, their Stratix 10 was announced to be manufactured in Intel's fabs. Predictably (Intel is not an open fab business), this massively delayed Stratix 10.

At least AMD is fabless, so there should be no such issue with Xilinx.

Both Altera and Xilinx sell premium devices in terms of cost. If you make a high-volume product based on an FPGA and care about profit margin, you are better off with Lattice (assuming their devices are performant enough for your application). It would be nice if any of these acquisitions made FPGA prices more competitive, but I doubt it.
I'm surprised by this move from AMD; there is nothing special about Xilinx except their 'super-protected' compilers. I don't know why people think there is some sophisticated magic inside FPGA chips.

GHz limits force companies to move more and more into parallel computing, and FPGAs might seem like the right move, but only partially so: for computation we don't need to simulate gates, we need arithmetic operations, so what we really want is something like a Field-Programmable ALU Array (FPALUA).
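To make that notion concrete, a toy software model of such an array (every name here is invented for illustration; this is a sketch of the idea, not any real device): cells are configured with arithmetic operations rather than gate-level truth tables:

    #include <cstdint>
    #include <vector>

    // Toy "field-programmable ALU array": configuring a cell means picking
    // an arithmetic operation, not wiring up simulated gates.
    enum class Op : uint8_t { Add, Sub, Mul, Mac };

    struct AluCell {
        Op op;
        int32_t acc = 0;  // local accumulator, used by the Mac op
        int32_t eval(int32_t a, int32_t b) {
            switch (op) {
                case Op::Add: return a + b;
                case Op::Sub: return a - b;
                case Op::Mul: return a * b;
                case Op::Mac: return acc += a * b;  // multiply-accumulate
            }
            return 0;
        }
    };

    int main() {
        // "Program" a three-stage pipeline: (x + y), then * 2, then accumulate.
        std::vector<AluCell> row = {{Op::Add}, {Op::Mul}, {Op::Mac}};
        int32_t v = row[0].eval(3, 4);  // 7
        v = row[1].eval(v, 2);          // 14
        v = row[2].eval(v, 1);          // acc is now 14
        return v == 14 ? 0 : 1;
    }

(The hard DSP slices in modern FPGAs are, roughly, a fixed-function step in this direction.)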
Has Altera done that well for Intel's bottom line? I haven't been watching that closely.

I would love an RFSoC-type peripheral though. Sort of a Spectrum Processing Unit, or SPU. If you extrapolate the cuSignal work [1] to its logical extreme of a purpose-built auxiliary processing unit, the possibilities are pretty amazing.

[1] https://github.com/rapidsai/cusignal
This could work if AMD invested a ton in software, like Intel has with oneAPI.

But if AMD really wants to get into a new market, it could try going into mobile. The 4000-series CPUs are great laptop CPUs that could go further down the scale, and over a couple of generations maybe into phones too. Unlikely or stupid idea (RIP Broxton)? Still worth a shot, I feel.
Imagine a reconfigurable GPU.

How many f8, f16, f32, f64 units do you want right now? Do you need special instructions like fused add+multiply, or some bit permutation? You can add some.

There's no need to reconfigure it very fast, or per VM; not having to reboot the hypervisor OS would be enough. It would, for example, allow changing the type of instances a particular server can offer, adjusting to demand, and thus over-provisioning less.
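A hedged sketch of what the configuration surface might look like; everything here, the types and the driver call alike, is hypothetical and invented purely for illustration:

    #include <cstdint>

    // Hypothetical profile describing the mix of execution units to
    // instantiate on the (imaginary) reconfigurable GPU.
    struct GpuProfile {
        uint32_t f8_units;
        uint32_t f16_units;
        uint32_t f32_units;
        uint32_t f64_units;
        bool fused_add_multiply;  // optional special instruction
        bool bit_permute;         // optional bit-permutation unit
    };

    // Imaginary driver entry point; taking minutes would be acceptable,
    // as long as the hypervisor OS keeps running.
    int gpu_reconfigure(int device, const GpuProfile& p) {
        (void)device; (void)p;
        return 0;  // stub: real hardware would load a new configuration here
    }

    int main() {
        // Shift a host from ML-inference instances (f8/f16 heavy) to
        // HPC instances (f64 heavy) as demand changes.
        GpuProfile hpc{0, 128, 1024, 512, true, false};
        return gpu_reconfigure(0, hpc);
    }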