
FPGAs and Deep Machine Learning

65 points by chclau over 8 years ago

7 comments

inetsee over 8 years ago
The history of Artificial Intelligence goes back much further than the "early 1980s". It goes back at least to the 1956 Dartmouth conference, if not to Turing's 1950 paper, "Computing Machinery and Intelligence".
jcbeard over 8 years ago
So what's changed that FPGAs are making a comeback? They were all the rage in 2008/09... however that faded relatively quickly when people realized programming them was a hassle and that programming choices had direct implications for synthesizability, clock speed, and area. Google Trends shows a downward trend in interest from 2004 to the present, so my intuition was close. OpenCL -> HDL and Vivado's C/C++ -> HDL flows make the process a bit easier, but it's still not as easy as writing optimized C code.

To get performance on an FPGA over a hard core you must go very wide (as in lots of parallelism). I suspect you could order custom hard cores from places like Tensilica and get far better perf/watt. I love FPGAs; they thrive on parallel integer/fixed-point codes if enough time is put into designing the pipeline. It seems for most float-heavy codes a hard-core unit at a higher clock rate with a dedicated MMU is much better? Am I missing something that changes that?
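As a rough illustration of the Vivado C/C++ -> HDL flow mentioned above, here is a minimal HLS-style sketch of a pipelined fixed-point dot product. It assumes the Xilinx Vivado HLS toolchain (its ap_fixed type and #pragma HLS directives); the function name and array sizes are invented for the example.

```cpp
// Minimal Vivado-HLS-style fixed-point dot product (illustrative sketch).
// ap_fixed<16, 4>: 16-bit fixed point with 4 integer bits.
#include <ap_fixed.h>

typedef ap_fixed<16, 4> fix_t;

fix_t dot64(const fix_t a[64], const fix_t b[64]) {
    fix_t acc = 0;
dot_loop:
    for (int i = 0; i < 64; i++) {
#pragma HLS PIPELINE II=1  // ask the tool for one multiply-accumulate per clock
        acc += a[i] * b[i];
    }
    return acc;
}
```

Unrolling the loop and partitioning the arrays trades area for width, which is the "go very wide" knob described above: each directive choice feeds straight into synthesizability, clock speed, and area.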
doc_holliday over 8 years ago
It's excellent to see FPGAs being increasingly utilised.

As Moore's law no longer holds true for CPUs, interest is increasingly turning to FPGAs. You can create a truly bespoke processor for any task.

Of course this is a trade-off between development time/cost and the processing needs of the task.

I suspect demand for FPGAs will only increase over the coming years. Intel's integration of FPGAs into their line of processors is a promising step and a sign of where things may be heading.
vonnik over 8 years ago
There is literally no new information in this article about DL and FPGAs.
fnord123 over 8 years ago
I had discussed this with some Deep Learning experts at the local university, and they believed that FPGAs won't be as powerful as people seem to think, since the many-many connections between blocks will not be up to the task.

In any event, Nerabus is a company (by the same guys as CodeThink) which is interested in running FPGAs in the cloud: http://nerabus.com I'm not sure if they got off the ground with the idea or if they were too early or what.
daveloyall over 8 years ago
Hi, chclau, I see you are the author and you are present in this comment thread.

Your post smells like spam. However, other posts on the same blog have content: https://fpgasite.wordpress.com/2016/08/09/pseudo-random-generator-tutorial/

What's up? What is this blog for?
nl over 8 years ago
Note that as far as I'm aware all the custom FPGA solutions (and even Google's TPU, which I think is an ASIC) are really only being used for inference, and only "beat" GPUs on an efficiency basis.

That's interesting and useful, but the real bottleneck with deep learning is the training stage. AFAIK everyone is still doing that on GPUs.
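One reason inference maps onto FPGAs and ASICs more readily than training is that trained weights can be quantized to narrow integers (a good fit for fixed-point fabric), while training still needs floating-point gradients. Below is a minimal sketch of symmetric max-abs post-training quantization, with invented names; production frameworks use per-channel scales, calibration data, and so on.

```cpp
// Symmetric max-abs quantization of trained float weights to int8 (sketch).
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

struct QuantizedWeights {
    std::vector<int8_t> q;  // quantized values
    float scale;            // reconstruct with w ~= q[i] * scale
};

QuantizedWeights quantize(const std::vector<float>& w) {
    // Find the largest magnitude and map [-max_abs, max_abs] onto [-127, 127].
    float max_abs = 0.0f;
    for (float x : w) max_abs = std::max(max_abs, std::fabs(x));
    float scale = (max_abs > 0.0f) ? max_abs / 127.0f : 1.0f;

    QuantizedWeights out;
    out.scale = scale;
    out.q.reserve(w.size());
    for (float x : w)  // round each weight to the nearest int8 step
        out.q.push_back(static_cast<int8_t>(std::lround(x / scale)));
    return out;
}
```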