Write hybrid CPU/GPU programs in Haskell

131 points by dons about 13 years ago

6 comments

tmurray about 13 years ago

(insert standard disclaimer about being responsible for CUDA here)

I'm definitely happy to see more languages with GPU support, but schedulers to distribute work between CPUs and GPUs are a particular interest of mine. The most full-featured I've seen is StarPU:

http://runtime.bordeaux.inria.fr/StarPU/

But there's still a lot of work to be done; it would be very interesting to remove the need for the developer to estimate time spent on CPU (or one type of processor) versus time spent on GPU and see the effects on developer productivity, for example.
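For illustration, here is a toy sketch of the kind of manual split tmurray describes, where the developer supplies an estimated GPU speedup and the work is partitioned accordingly; the function and its parameters are hypothetical and belong to none of the packages discussed.

    -- Hypothetical sketch of a static CPU/GPU work split driven by a
    -- programmer-supplied estimate; a StarPU-style scheduler would learn
    -- this ratio instead of requiring the developer to guess it.
    splitWork :: Double        -- estimated GPU speedup over the CPU for this kernel
              -> [a]           -- work items
              -> ([a], [a])    -- (CPU share, GPU share)
    splitWork speedup items = splitAt cpuShare items
      where
        total    = length items
        cpuShare = round (fromIntegral total / (1 + speedup))

    -- e.g. splitWork 3 [1..100 :: Int] sends 25 items to the CPU and 75 to the GPU.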
meric about 13 years ago
My final year thesis was on implementing some algorithms with Accelerate, and one of the things I noted was that on a 2009 MacBook Pro (256 megabyte integrated Nvidia GPU), a single-threaded C program runs faster than using Accelerate, even when all it does is multiply each element of an array by two. The performance discrepancy is even greater for more complicated problems. So, before you jump in to use this and expect better performance on embarrassingly parallel problems, make sure your Nvidia GPU is not integrated and has lots of memory.

Of course this new package is different because it uses both CPU/GPU...

I also found Accelerate programs hard to debug. You cannot use "trace" to print out stuff during computation because that is a CPU instruction.
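For context, the benchmark meric describes looks roughly like the following in the Accelerate EDSL; this is a minimal sketch assuming the accelerate and accelerate-cuda packages of that era, and the backend module name is an assumption rather than a quote from the thesis.

    -- Sketch of "multiply each element of an array by two" in Accelerate,
    -- executed on the CUDA backend (module name assumed from accelerate-cuda).
    import qualified Data.Array.Accelerate      as A
    import qualified Data.Array.Accelerate.CUDA as CUDA

    doubleAll :: A.Vector Float -> A.Vector Float
    doubleAll xs = CUDA.run $ A.map (* 2) (A.use xs)

    main :: IO ()
    main = do
      let xs = A.fromList (A.Z A.:. 1000000) [0 ..] :: A.Vector Float
      print (take 5 (A.toList (doubleAll xs)))

The point of the comparison is that a plain single-threaded C loop over the same array can beat this on an integrated GPU, because the transfer and launch overhead dominates such a trivial kernel.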
kaosjester about 13 years ago
My wife worked on this a bit with Adam (https://twitter.com/#!/acfoltzer) and Ryan. There is a pending submission to ICFP.

The reason they went with CUDA was to plug into Accelerate's existing framework without redeveloping the entire wheel. As meric mentioned, Accelerate is a pain to do anything with, and you can bet dollars to donuts that this package will generate the hard parts for you.

IIRC, ParFunk also has some nice framework in place for distributed computation (though I'm not certain it's completely in working order yet).
wtracy about 13 years ago
Holy cow, I didn't even know that we had Haskell -> CUDA compilation working. Very awesome stuff!
tikhonj about 13 years ago
That's really cool.

I also like how the blog post is available as a literate Haskell file. I think that's a great way to make an introduction more useful, and I wish more languages would take an approach like that for different articles.
hypervisor about 13 years ago
Where is the OpenCL version?