These FPGAs are absolutely _massive_ (in terms of available resources). AWS isn't messing around.

To put things into practical perspective: my company sells an FPGA-based solution that applies our video enhancement technology in real time to video streams up to 1080p60 (our consumer product handles HDMI in and out). It's a world-class algorithm with complex calculations, generating 3D information and saliency maps on the fly. I crammed that beast into a Cyclone IV with 40K LEs.

It's hard to translate the "System Logic Cells" metric that Xilinx uses to measure these FPGAs, but a pessimistic conversion puts it at about 1.1 million LEs. That's over 27 times the logic my real-time video enhancement algorithm uses. With just one of these FPGAs we could run our algorithm on six 4K60 4:4:4 streams at once. That's insane.

For another estimate, my rough calculations show that each FPGA would be able to do about 7 GH/s mining Bitcoin. Not an impressive figure by today's standards, but back when FPGA mining was a thing the best I ever got out of an FPGA was 500 MH/s per chip (on commercially viable devices).

I'm very curious what Amazon is going to charge for these instances. FPGAs of that size are incredibly expensive (5 figures each). Xilinx no doubt gave them a special deal in exchange for the opportunity to participate in what could be a very large market. AWS has the potential to push a lot of volume for FPGAs that traditionally had very poor volume. IntelFPGA will no doubt fight exceptionally hard to win business from Azure or Google Cloud.

* Take all these estimates with a grain of salt. Most recent "advancements" in FPGA density are the result of trickier architectures. FPGAs today are still homogeneous logic, but the cells aren't as fine grained as they used to be. In other words, they're basically moving from RISC to CISC. So it's always up in the air how well all the logic cells can be utilized by a given algorithm.
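For the curious, here is the actual back-of-envelope in Python. It assumes logic scales roughly with pixel rate, which is hand-wavy, and uses my own pessimistic LE figure from above:

    # Rough sizing estimate, using my own numbers from above.
    vu9p_les      = 1_100_000   # pessimistic LE-equivalent of the VU9P
    my_design_les = 40_000      # my 1080p60 design on a Cyclone IV

    headroom = vu9p_les / my_design_les            # ~27.5x
    pixel_ratio = (3840 * 2160) / (1920 * 1080)    # 4K has 4x the pixels of 1080p
    streams_4k60 = int(headroom / pixel_ratio)     # ~6 streams at 4K60

    print(f"{headroom:.1f}x headroom, ~{streams_4k60} 4K60 4:4:4 streams")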
If you don't click through to read about this: you can write an FPGA image in Verilog/VHDL and upload it... and then run it. To me that seems like magic.

HDK here: https://github.com/aws/aws-fpga

(I work for AWS)
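Very roughly, once the HDK flow has produced a Design Checkpoint and the tarball is in S3, turning it into an AFI is a single API call. A sketch in Python (bucket/key/name values are placeholders; check the HDK docs for the exact flow):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Register an S3-hosted Design Checkpoint tarball as an
    # Amazon FPGA Image (AFI).
    response = ec2.create_fpga_image(
        Name="my-first-afi",
        Description="hello-world AFI built with the aws-fpga HDK",
        InputStorageLocation={"Bucket": "my-dcp-bucket", "Key": "dcp/my_design.tar"},
        LogsStorageLocation={"Bucket": "my-dcp-bucket", "Key": "logs/"},
    )
    print(response["FpgaImageId"], response["FpgaImageGlobalId"])

The returned AFI/AGFI IDs are what you later load onto the card from the F1 instance.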
> Today we are launching a developer preview of the new F1 instance. In addition to building applications and services for your own use, you will be able to package them up for sale and reuse in AWS Marketplace.

Wow. An app store for FPGA IPs and the infrastructure to enable anyone to use it. That's really cool.
I'm surprised that no one has linked to OpenCores (http://opencores.org/) yet. They've got a ton of VHDL code under various open licenses. The project's been around since forever and is probably a good place to start if you're curious about FPGA programming.
OVH is testing Altera chips: the Altera Arria 10 GX 1150 FPGA.

https://www.runabove.com/FPGAaaS.xml
If anyone is wondering what the FPGA board looks like:

https://imgur.com/a/wUTIp
Here's a post by Bunnie Huang from a few months ago saying that Moore's Law is dead and we will now see more of this kind of thing:

http://spectrum.ieee.org/semiconductors/design/the-death-of-moores-law-will-spur-innovation

Pretty interesting read. Also, kudos to AWS!
For my institute this is going to be _really_ useful for genomics data processing, because we can't justify buying expensive hardware for undergrad research. Using FPGA hardware over the cloud sounds almost magical!
The traditional EDA tool companies (Mentor, Cadence, Synopsys) all tried offering their tools under a cloud/SaaS model a few years back and nobody went for it. Chip designers are too paranoid about their source code leaking. I wonder if that attitude will hamper adoption of this model as well?
Quick question: if someone wants to learn FPGA programming, is learning C the only way to go? How hard is it to learn and program in Verilog/VHDL without an electrical engineering background?

If anyone has links or books to suggest, please do.

Thank you
Very interesting. I'd still like to see the JVM pick up the FPGA as a possible compile target, so that people could run apps that seamlessly use the FPGA where appropriate. I have mentioned this to Intel, who are promoting this technology (and also have a team that contributes to the JVM), but so far no one has stated publicly that they are working on such a thing.
This is amazing! We have been developing a tool called Rigel at Stanford (http://rigel-fpga.org) to make it much easier to develop image processing pipelines for FPGAs. We have seen some really significant speedups vs CPUs/GPUs [1].

[1] http://www.graphics.stanford.edu/papers/rigel/
Given that the Amazon cloud is such a huge consumer of Intel's x86 processors, even using Amazon-tailored Xeons, it's surprising that Amazon chose Xilinx over the Intel-owned Altera.

These Xilinx 16nm Virtex FPGAs are beasts, but Altera has some compelling choices as well. Perhaps some of the hardened IP in the Xilinx parts tipped the scales, such as the H.265 encode/decode, 100G EMAC, and PCIe Gen 4?
I'm a total FPGA n00b, so here's a dumb question: what _can_ you do with this FPGA that you can't with a GPU?

OK, here's a concrete question: I have a vector of 64 floats. I want to multiply it with a matrix of size 64xN, where N is on the order of 1 billion. How fast can I do this multiplication, and find the top K elements of the resulting N-dimensional array?
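For reference, the NumPy baseline I'd be comparing against looks something like this (a rough sketch: it assumes the matrix lives on disk as raw float32, stored as N rows of 64; the path and chunking details are made up):

    import numpy as np

    def matvec_topk(vec64, matrix_path, n, k, chunk=1_000_000):
        """Streamed baseline: dot a 64-float vector against an N x 64
        float32 matrix on disk (the 64xN matrix, transposed) and keep
        the indices/values of the top-k results."""
        mat = np.memmap(matrix_path, dtype=np.float32, mode="r", shape=(n, 64))
        best_idx = np.empty(0, dtype=np.int64)
        best_val = np.empty(0, dtype=np.float32)
        for start in range(0, n, chunk):
            scores = mat[start:start + chunk] @ vec64          # (chunk,) dot products
            kk = min(k, scores.size)
            idx = np.argpartition(scores, scores.size - kk)[-kk:]  # chunk top-k
            best_idx = np.concatenate([best_idx, idx + start])
            best_val = np.concatenate([best_val, scores[idx]])
            if best_val.size > k:                              # prune to global top-k
                keep = np.argpartition(best_val, best_val.size - k)[-k:]
                best_idx, best_val = best_idx[keep], best_val[keep]
        order = np.argsort(-best_val)                          # sort descending
        return best_idx[order], best_val[order]

At float32 that's roughly 64 x 4 bytes x 1e9 ≈ 256 GB of matrix to stream through, so I assume memory bandwidth dominates whichever chip does the multiply-accumulates.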
Does this mean that ML on FPGAs will become more common? Can someone comment on the viability of this? Would there be a speedup, and if so, would it be large enough to warrant rewriting it all in VHDL/Verilog?
Bitcoin mining. WPA2 brute forcing.

Maybe someone will finally find the triple-DES key Adobe used to protect its password database.

The possibilities are endless :)
So now anyone can run their high-frequency trading business on the side :-P

So much easier than buying hardware. Deep learning sometimes works similarly: for many use cases it's easier to play with on AWS, with their hourly billing, than to buy hardware.
> Xilinx UltraScale+ VU9P fabricated using a 16 nm process.

> 64 GiB of ECC-protected memory on a 288-bit wide bus (four DDR4 channels).

> Dedicated PCIe x16 interface to the CPU.

Does anyone know whether this is likely to be a plug-in card? And can I buy one to plug in to a local machine for testing?
For complex designs the simulator that comes with the Vivado tools (xsim) is not going to cut it. I wonder if they are working on deals with Mentor (or competitors Cadence and Synopsys) to license their full-featured simulators.

Even better, maybe Amazon (and others getting into this space, like Intel and Microsoft) will put their weight behind an open source VHDL/Verilog simulator. A few exist but they are pretty slow and way behind the curve in language support. Heck, maybe they could drive adoption of one of the up-and-coming HDLs like Chisel, or create one even better. A guy can dream...
I'd be interested in practical use cases that come to your mind (like the person who commented about genomics data processing for a university).

What could YOU use this for professionally?

(I certainly always wanted to play around with an FPGA for fun...)
Anyone have a hardware zlib implementation that I can drop into my Python toolchains as a direct replacement for zlib, to compress web-server responses without adding latency?

I could also use a fast JPEG encoder/decoder.
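What I'm imagining is something shaped like this: the same interface as the stdlib zlib so it slots into existing code, with the FPGA doing the work behind it (the hw_zlib module here is purely hypothetical):

    import zlib

    try:
        import hw_zlib          # hypothetical FPGA-backed offload module
    except ImportError:
        hw_zlib = None

    def compress(data: bytes, level: int = 6) -> bytes:
        """Drop-in replacement for zlib.compress: use the (hypothetical)
        FPGA offload when its driver is present, otherwise fall back to
        the stock software implementation."""
        if hw_zlib is not None:
            return hw_zlib.compress(data, level)
        return zlib.compress(data, level)

    # Usage in a web-server response path:
    # body = render_response()
    # payload = compress(body)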
FPGA instances are a game changer in every way.

Let this day be known as the beginning of the end of general-purpose compute infrastructure for internet-scale services.
Wow. That's what was going through my mind reading this article, but it quickly dawned on me (sadly) that I probably won't be able to build anything with it, since we aren't solving problems that require programmable hardware. Euphoric nonetheless to see this kind of innovation coming from AWS.