These FPGAs are absolutely _massive_ (in terms of available resources). AWS isn't messing around.

To put things into practical perspective, my company sells an FPGA-based solution that applies our video enhancement technology in real time to any video stream up to 1080p60 (our consumer product handles HDMI in and out). It's a world-class algorithm with complex calculations, generating 3D information and saliency maps on the fly. I crammed that beast into a Cyclone IV with 40K LEs.

It's hard to translate the "System Logic Cells" metric Xilinx uses to measure these FPGAs into conventional LEs, but a pessimistic calculation puts it at about 1.1 million. That's over 27 times the logic my real-time video enhancement algorithm uses. Since a 4K60 stream has four times the pixels of 1080p60, just one of these FPGAs could run our algorithm on 6 4K60 4:4:4 streams at once. That's insane.

For another estimate, my rough calculations show that each FPGA would be able to do about 7 GH/s mining Bitcoin. Not an impressive figure by today's standards, but back when FPGA mining was a thing, the best I ever got out of a commercially viable chip was 500 MH/s.

I'm very curious what Amazon is going to charge for these instances. FPGAs of this size are incredibly expensive (5 figures each). Xilinx no doubt gave them a special deal in exchange for the opportunity to participate in what could be a very large market. AWS has the potential to push serious volume for parts that have traditionally sold in very small quantities. Intel FPGA (formerly Altera) will no doubt fight exceptionally hard to win business from Azure or Google Cloud.

* Take all these estimates with a grain of salt. Most recent "advancements" in FPGA density come from trickier architectures: the fabric is still homogeneous logic, but the cells aren't as fine-grained as they used to be. In other words, FPGAs are basically moving from RISC to CISC, so it's always up in the air how well all the logic cells can be utilized for a given algorithm.
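
For anyone who wants to sanity-check the arithmetic, here's the back-of-envelope version. The figures are the same ones quoted above; the 4x-per-stream scaling is my own assumption that logic usage grows roughly linearly with pixel throughput:

    # Rough capacity math (all figures from the estimates above)
    f1_les   = 1_100_000  # pessimistic LE-equivalent of one AWS FPGA
    algo_les = 40_000     # our 1080p60 design on a Cyclone IV

    headroom = f1_les / algo_les  # ~27.5x our current design

    # Assumption: 4K60 (3840x2160) is 4x the pixels of 1080p60,
    # so roughly 4x the logic per stream
    streams_4k60 = f1_les // (algo_les * 4)  # -> 6 streams

    # Mining comparison: 7 GH/s per FPGA vs. my old 500 MH/s best
    mining_gain = 7_000 / 500  # -> 14x per chip

    print(headroom, streams_4k60, mining_gain)  # 27.5 6 14.0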