I'd like to take a moment to appreciate how utterly monumental it is to have free, instant access to such human ingenuity. This is now first on my reading list.

I'm working at an HPC company where everyone is an order of magnitude or two smarter than me. It's fun and overwhelming. If anyone has a recommended study plan for this subject (I have an unstructured background in CS) or 'lighter' complementary resources, I'd be grateful to read them.
> First of all, as of this writing (late 2010), GPUs are attached processors, for instance over a PCI-X bus, so any data they operate on has to be transferred from the CPU.

I think we got GPUDirect RDMA circa 2013. How time flies!
A very timely submission!

I've been looking into performance optimization on heterogeneous multicore systems, and much of what I've seen published recently on arXiv seems to point to tasks, their granularity, and their scheduling as increasingly important.

This book mentions but doesn't spend a lot of space on these subjects. It will be very interesting to see how it all evolves.
So, is deep learning eating HPC's lunch?

And if it is, how much of this comes down to deep learning buying compute at a cheaper rate than HPC?

Most serious HPC involves simulating something (weather, atomic particles, car crashes). Lately, there has been a lot of work using neural networks to approximate such simulations more efficiently. But would these make sense if the HPC programs ran on GPUs to start with?
I’ve had a lot of dealings with the folks at TACC in my day job, and their work is pretty amazing. Add in a Top10 supercomputer and you have something pretty impressive.