What a useless article.<p>The big problem is software people thinking that they have any concept of actual hardware design.<p>If they understood hardware, they would understand that an FPGA is the least efficient way to accomplish anything.<p>Routing is sparser than on any dedicated chip. You burn 10-100x the transistors to do the same task. FPGAs are hot and slow.<p>Even for signal processing, an FPGA is going to be quite hard-pressed to beat a 2.0GHz ARM with NEON extensions unless it is <i>very</i> expensive and your algorithm is very dataflow-oriented. How many ARMs can I put on a board for $10,000-$100,000 (the price of the very highest-end FPGAs)?<p>You use an FPGA because you have a low-volume application that you can't do any other way, and your application has enough margin that you can eat the cost of the FPGA. And you are always looking to wipe out that FPGA and replace it with a microprocessor, because a microprocessor is so much cheaper and easier to deal with.
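The "how many ARMs for that budget" question above can be sketched as back-of-the-envelope arithmetic. The per-board price below is a made-up assumption for illustration; only the FPGA budget figures come from the comment itself:

```python
# Back-of-the-envelope: how many ARM boards fit in a high-end FPGA budget?
# ARM_BOARD_COST is a hypothetical illustrative number, not a real quote.
ARM_BOARD_COST = 50          # assumed cost of a 2.0GHz ARM board with NEON
FPGA_BUDGET_LOW = 10_000     # low end of the FPGA price range cited above
FPGA_BUDGET_HIGH = 100_000   # high end of that range

def boards_for_budget(budget, unit_cost=ARM_BOARD_COST):
    """Number of whole ARM boards purchasable for a given FPGA budget."""
    return budget // unit_cost

print(boards_for_budget(FPGA_BUDGET_LOW))   # 200 boards
print(boards_for_budget(FPGA_BUDGET_HIGH))  # 2000 boards
```

Even if the real per-board cost were several times higher, the ratio stays lopsided, which is the point of the comparison.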
I've heard several computational physicists make this complaint to NVIDIA sales reps. The standard response, which I'm sure is correct, goes as follows.<p>Designing a fast processor is very expensive, far beyond the means of the research community. The only way anyone can afford it is to sell millions of the things to gamers. To put $1 of special hardware on your numerical card, we have to put it on 1000 graphics cards too, so you'd have to pay $1000 for it. Bad luck: scientists are destined to hack hardware that was designed for larger markets.
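The sales-rep amortization argument above is just fixed-cost division, and can be sketched with illustrative numbers (the design cost and unit counts below are assumptions, not NVIDIA figures):

```python
# Sketch of the NRE (non-recurring engineering) amortization argument:
# a fixed design cost is spread over every unit sold, so per-unit cost
# depends entirely on market size. All numbers are illustrative.
def per_unit_nre(design_cost, units_sold):
    """Share of the fixed design cost carried by each unit sold."""
    return design_cost / units_sold

# The same $1,000,000 of engineering costs $1/unit in a mass market
# of a million gamers, but $1000/unit in a niche market of a thousand
# scientists -- hence "pay $1000 for $1 of special hardware".
print(per_unit_nre(1_000_000, 1_000_000))  # mass market
print(per_unit_nre(1_000_000, 1_000))      # niche scientific market
```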
> FPGAs are legacy baggage in the same way that GPGPUs are.<p>I hoped the author would expand on this point.<p>It is also my impression that GPGPUs are just "a hack": they should have been normal coprocessors to the main CPU, just like the FPU and the vector units are. It seems that we are now finally reaching that model (in Linux the graphics device is almost completely separated from the compute device, although most of the time they sit on the same physical device), but we are still far from the "coprocessor extension" opcode space of MIPS processors, or from the "brain and arms" model of CELL (one generic CPU, many specialized coprocessors).
FPGAs would be more attractive if they weren't so overpriced... good thing that patents are around to almost completely eliminate competition in that space.
I wish he would comment more on what he finds wrong with HDLs.<p>I fail to understand why using an HDL for a digital ASIC is fine, but using one for an FPGA in the context of acceleration is not.
Yes, the RTL level of abstraction is way too low, even for most ASIC work. Yes, we need higher-level HDLs (more abstract than the aforementioned Chisel and Bluespec). I'm working on it, stay tuned.<p>But what I cannot get from this article is what exactly is wrong with current FPGA designs. They've got DSP slices (i.e., ALU macros), they've got block RAMs, and they've got all the routing facilities one can imagine. For dataflow stuff that is more than enough.<p>Of course it would have been much better if the vendors published detailed datasheets for all the available cells and the interconnect, for the bitfile formats, etc., to make it possible for alternative, open-source toolchains to appear. Yes, their existing toolchains are, well, clumsy. But it is still quite possible to abstract away from the peculiarities of these toolchains.